00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1815
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3076
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.149 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.151 The recommended git tool is: git
00:00:00.152 using credential 00000000-0000-0000-0000-000000000002
00:00:00.154 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.176 Fetching changes from the remote Git repository
00:00:00.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.218 Using shallow fetch with depth 1
00:00:00.218 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.218 > git --version # timeout=10
00:00:00.238 > git --version # 'git version 2.39.2'
00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.238 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.238 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.043 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.054 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.064 Checking out Revision 71481c63295b6b9f0ecef6c6e69e033a6109160a (FETCH_HEAD)
00:00:06.064 > git config core.sparsecheckout # timeout=10
00:00:06.073 > git read-tree -mu HEAD # timeout=10
00:00:06.088 > git checkout -f 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=5
00:00:06.104 Commit message: "jenkins/jjb-config: Disable bsc job until further notice"
00:00:06.104 > git rev-list --no-walk 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=10
00:00:06.169 [Pipeline] Start of Pipeline
00:00:06.180 [Pipeline] library
00:00:06.181 Loading library shm_lib@master
00:00:06.181 Library shm_lib@master is cached. Copying from home.
00:00:06.198 [Pipeline] node
00:00:06.208 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.210 [Pipeline] {
00:00:06.221 [Pipeline] catchError
00:00:06.223 [Pipeline] {
00:00:06.235 [Pipeline] wrap
00:00:06.244 [Pipeline] {
00:00:06.251 [Pipeline] stage
00:00:06.253 [Pipeline] { (Prologue)
00:00:06.413 [Pipeline] sh
00:00:06.692 + logger -p user.info -t JENKINS-CI
00:00:06.710 [Pipeline] echo
00:00:06.712 Node: GP11
00:00:06.720 [Pipeline] sh
00:00:07.020 [Pipeline] setCustomBuildProperty
00:00:07.031 [Pipeline] echo
00:00:07.032 Cleanup processes
00:00:07.036 [Pipeline] sh
00:00:07.324 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.324 2833305 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.338 [Pipeline] sh
00:00:07.618 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.618 ++ grep -v 'sudo pgrep'
00:00:07.618 ++ awk '{print $1}'
00:00:07.618 + sudo kill -9
00:00:07.618 + true
00:00:07.633 [Pipeline] cleanWs
00:00:07.643 [WS-CLEANUP] Deleting project workspace...
00:00:07.644 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.650 [WS-CLEANUP] done
00:00:07.654 [Pipeline] setCustomBuildProperty
00:00:07.669 [Pipeline] sh
00:00:07.952 + sudo git config --global --replace-all safe.directory '*'
00:00:08.023 [Pipeline] nodesByLabel
00:00:08.024 Found a total of 1 nodes with the 'sorcerer' label
00:00:08.034 [Pipeline] httpRequest
00:00:08.039 HttpMethod: GET
00:00:08.040 URL: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:08.055 Sending request to url: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:08.123 Response Code: HTTP/1.1 200 OK
00:00:08.124 Success: Status code 200 is in the accepted range: 200,404
00:00:08.124 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:35.009 [Pipeline] sh
00:00:35.292 + tar --no-same-owner -xf jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:35.311 [Pipeline] httpRequest
00:00:35.316 HttpMethod: GET
00:00:35.316 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:35.317 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:35.321 Response Code: HTTP/1.1 200 OK
00:00:35.321 Success: Status code 200 is in the accepted range: 200,404
00:00:35.322 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:02.200 [Pipeline] sh
00:01:02.487 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:05.032 [Pipeline] sh
00:01:05.316 + git -C spdk log --oneline -n5
00:01:05.317 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:01:05.317 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function
00:01:05.317 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover
00:01:05.317 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair`
00:01:05.317 3b33f4333 test/nvme/cuse: Fix typo
00:01:05.372 [Pipeline] }
00:01:05.391 [Pipeline] // stage
00:01:05.397 [Pipeline] stage
00:01:05.398 [Pipeline] { (Prepare)
00:01:05.408 [Pipeline] writeFile
00:01:05.417 [Pipeline] sh
00:01:05.693 + logger -p user.info -t JENKINS-CI
00:01:05.704 [Pipeline] sh
00:01:05.985 + logger -p user.info -t JENKINS-CI
00:01:05.998 [Pipeline] sh
00:01:06.282 + cat autorun-spdk.conf
00:01:06.282 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.282 SPDK_TEST_NVMF=1
00:01:06.282 SPDK_TEST_NVME_CLI=1
00:01:06.282 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:06.282 SPDK_TEST_NVMF_NICS=e810
00:01:06.282 SPDK_RUN_UBSAN=1
00:01:06.282 NET_TYPE=phy
00:01:06.290 RUN_NIGHTLY=1
00:01:06.294 [Pipeline] readFile
00:01:06.319 [Pipeline] withEnv
00:01:06.321 [Pipeline] {
00:01:06.335 [Pipeline] sh
00:01:06.621 + set -ex
00:01:06.621 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:06.621 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:06.621 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.621 ++ SPDK_TEST_NVMF=1
00:01:06.621 ++ SPDK_TEST_NVME_CLI=1
00:01:06.621 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:06.621 ++ SPDK_TEST_NVMF_NICS=e810
00:01:06.621 ++ SPDK_RUN_UBSAN=1
00:01:06.621 ++ NET_TYPE=phy
00:01:06.621 ++ RUN_NIGHTLY=1
00:01:06.621 + case $SPDK_TEST_NVMF_NICS in
00:01:06.621 + DRIVERS=ice
00:01:06.621 + [[ tcp == \r\d\m\a ]]
00:01:06.621 + [[ -n ice ]]
00:01:06.621 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:06.621 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:06.621 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:06.621 rmmod: ERROR: Module irdma is not currently loaded
00:01:06.621 rmmod: ERROR: Module i40iw is not currently loaded
00:01:06.621 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:06.621 + true
00:01:06.621 + for D in $DRIVERS
00:01:06.621 + sudo modprobe ice
00:01:06.621 + exit 0
00:01:06.631 [Pipeline] }
00:01:06.648 [Pipeline] // withEnv
00:01:06.652 [Pipeline] }
00:01:06.675 [Pipeline] // stage
00:01:06.684 [Pipeline] catchError
00:01:06.686 [Pipeline] {
00:01:06.698 [Pipeline] timeout
00:01:06.698 Timeout set to expire in 40 min
00:01:06.699 [Pipeline] {
00:01:06.712 [Pipeline] stage
00:01:06.714 [Pipeline] { (Tests)
00:01:06.727 [Pipeline] sh
00:01:07.009 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.009 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.009 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.009 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:07.009 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.009 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:07.009 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:07.009 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:07.009 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:07.009 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:07.009 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.009 + source /etc/os-release
00:01:07.009 ++ NAME='Fedora Linux'
00:01:07.009 ++ VERSION='38 (Cloud Edition)'
00:01:07.009 ++ ID=fedora
00:01:07.009 ++ VERSION_ID=38
00:01:07.009 ++ VERSION_CODENAME=
00:01:07.009 ++ PLATFORM_ID=platform:f38
00:01:07.009 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:07.009 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:07.009 ++ LOGO=fedora-logo-icon
00:01:07.009 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:07.009 ++ HOME_URL=https://fedoraproject.org/
00:01:07.009 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:07.009 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:07.009 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:07.009 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:07.009 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:07.009 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:07.009 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:07.009 ++ SUPPORT_END=2024-05-14
00:01:07.009 ++ VARIANT='Cloud Edition'
00:01:07.009 ++ VARIANT_ID=cloud
00:01:07.009 + uname -a
00:01:07.009 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:07.009 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:07.962 Hugepages
00:01:07.962 node hugesize free / total
00:01:07.962 node0 1048576kB 0 / 0
00:01:07.962 node0 2048kB 0 / 0
00:01:07.962 node1 1048576kB 0 / 0
00:01:07.962 node1 2048kB 0 / 0
00:01:07.962
00:01:07.962 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:07.962 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:07.962 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:07.962 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:07.962 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:07.962 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:07.962 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:07.963 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:07.963 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:07.963 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:07.963 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:07.963 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:07.963 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:07.963 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:07.963 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:07.963 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:07.963 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:07.963 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:07.963 + rm -f /tmp/spdk-ld-path
00:01:07.963 + source autorun-spdk.conf
00:01:07.963 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.963 ++ SPDK_TEST_NVMF=1
00:01:07.963 ++ SPDK_TEST_NVME_CLI=1
00:01:07.963 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:07.963 ++ SPDK_TEST_NVMF_NICS=e810
00:01:07.963 ++ SPDK_RUN_UBSAN=1
00:01:07.963 ++ NET_TYPE=phy
00:01:07.963 ++ RUN_NIGHTLY=1
00:01:07.963 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:07.963 + [[ -n '' ]]
00:01:07.963 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.963 + for M in /var/spdk/build-*-manifest.txt
00:01:07.963 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:07.963 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.963 + for M in /var/spdk/build-*-manifest.txt
00:01:07.963 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:07.963 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.963 ++ uname
00:01:07.963 + [[ Linux == \L\i\n\u\x ]]
00:01:07.963 + sudo dmesg -T
00:01:07.963 + sudo dmesg --clear
00:01:07.963 + dmesg_pid=2833962
00:01:07.963 + [[ Fedora Linux == FreeBSD ]]
00:01:07.963 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.963 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.963 + sudo dmesg -Tw
00:01:07.963 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:07.963 + [[ -x /usr/src/fio-static/fio ]]
00:01:07.963 + export FIO_BIN=/usr/src/fio-static/fio
00:01:07.963 + FIO_BIN=/usr/src/fio-static/fio
00:01:07.963 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:07.963 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:07.963 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:07.963 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:07.963 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:07.963 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:07.963 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:07.963 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:07.963 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:07.963 Test configuration:
00:01:07.963 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.963 SPDK_TEST_NVMF=1
00:01:07.963 SPDK_TEST_NVME_CLI=1
00:01:07.963 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:07.963 SPDK_TEST_NVMF_NICS=e810
00:01:07.963 SPDK_RUN_UBSAN=1
00:01:07.963 NET_TYPE=phy
00:01:08.222 RUN_NIGHTLY=1
06:41:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:08.222 06:41:15 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:08.222 06:41:15 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:08.222 06:41:15 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:08.222 06:41:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.222 06:41:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.222 06:41:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.222 06:41:15 -- paths/export.sh@5 -- $ export PATH
00:01:08.222 06:41:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.222 06:41:15 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:08.222 06:41:15 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:08.222 06:41:15 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715488875.XXXXXX
00:01:08.222 06:41:15 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715488875.7eEoTk
00:01:08.222 06:41:15 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:08.222 06:41:15 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:08.222 06:41:15 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:08.222 06:41:15 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:08.222 06:41:15 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:08.222 06:41:15 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:08.222 06:41:15 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:08.222 06:41:15 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.222 06:41:15 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:01:08.222 06:41:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:08.222 06:41:15 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:08.222 06:41:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:08.222 06:41:15 -- spdk/autobuild.sh@16 -- $ date -u
00:01:08.222 Sun May 12 04:41:15 AM UTC 2024
00:01:08.222 06:41:15 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:08.222 LTS-24-g36faa8c31
00:01:08.222 06:41:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:08.222 06:41:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:08.222 06:41:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:08.222 06:41:15 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:08.222 06:41:15 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:08.222 06:41:15 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.222 ************************************
00:01:08.222 START TEST ubsan
00:01:08.222 ************************************
00:01:08.222 06:41:15 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:08.222 using ubsan
00:01:08.222
00:01:08.222 real 0m0.000s
00:01:08.222 user 0m0.000s
00:01:08.222 sys 0m0.000s
00:01:08.222 06:41:15 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:08.222 06:41:15 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.222 ************************************
00:01:08.222 END TEST ubsan
00:01:08.222 ************************************
00:01:08.222 06:41:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:08.222 06:41:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:08.222 06:41:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:08.222 06:41:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:08.222 06:41:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:08.222 06:41:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:08.222 06:41:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:08.222 06:41:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:08.222 06:41:15 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:08.222 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:08.222 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:08.482 Using 'verbs' RDMA provider
00:01:19.035 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:29.018 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:29.018 Creating mk/config.mk...done.
00:01:29.018 Creating mk/cc.flags.mk...done.
Type 'make' to build.
00:01:29.018 06:41:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:01:29.018 06:41:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:29.018 06:41:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:29.018 06:41:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.018 ************************************
00:01:29.018 START TEST make
00:01:29.018 ************************************
00:01:29.018 06:41:35 -- common/autotest_common.sh@1104 -- $ make -j48
00:01:29.018 make[1]: Nothing to be done for 'all'.
00:01:37.157 The Meson build system
00:01:37.157 Version: 1.3.1
00:01:37.157 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:37.157 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:37.157 Build type: native build
00:01:37.157 Program cat found: YES (/usr/bin/cat)
00:01:37.157 Project name: DPDK
00:01:37.157 Project version: 23.11.0
00:01:37.157 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:37.157 C linker for the host machine: cc ld.bfd 2.39-16
00:01:37.157 Host machine cpu family: x86_64
00:01:37.157 Host machine cpu: x86_64
00:01:37.157 Message: ## Building in Developer Mode ##
00:01:37.157 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:37.157 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:37.157 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:37.157 Program python3 found: YES (/usr/bin/python3)
00:01:37.157 Program cat found: YES (/usr/bin/cat)
00:01:37.157 Compiler for C supports arguments -march=native: YES
00:01:37.157 Checking for size of "void *" : 8
00:01:37.157 Checking for size of "void *" : 8 (cached)
00:01:37.157 Library m found: YES
00:01:37.157 Library numa found: YES
00:01:37.157 Has header "numaif.h" : YES
00:01:37.157 Library fdt found: NO
00:01:37.157 Library execinfo found: NO
00:01:37.157 Has header "execinfo.h" : YES
00:01:37.157 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:37.157 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:37.157 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:37.157 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:37.157 Run-time dependency openssl found: YES 3.0.9
00:01:37.157 Run-time dependency libpcap found: YES 1.10.4
00:01:37.157 Has header "pcap.h" with dependency libpcap: YES
00:01:37.157 Compiler for C supports arguments -Wcast-qual: YES
00:01:37.157 Compiler for C supports arguments -Wdeprecated: YES
00:01:37.157 Compiler for C supports arguments -Wformat: YES
00:01:37.157 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:37.157 Compiler for C supports arguments -Wformat-security: NO
00:01:37.157 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:37.157 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:37.157 Compiler for C supports arguments -Wnested-externs: YES
00:01:37.157 Compiler for C supports arguments -Wold-style-definition: YES
00:01:37.157 Compiler for C supports arguments -Wpointer-arith: YES
00:01:37.157 Compiler for C supports arguments -Wsign-compare: YES
00:01:37.157 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:37.157 Compiler for C supports arguments -Wundef: YES
00:01:37.157 Compiler for C supports arguments -Wwrite-strings: YES
00:01:37.157 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:37.157 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:37.157 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:37.157 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:37.157 Program objdump found: YES (/usr/bin/objdump)
00:01:37.157 Compiler for C supports arguments -mavx512f: YES
00:01:37.157 Checking if "AVX512 checking" compiles: YES
00:01:37.157 Fetching value of define "__SSE4_2__" : 1
00:01:37.157 Fetching value of define "__AES__" : 1
00:01:37.157 Fetching value of define "__AVX__" : 1
00:01:37.157 Fetching value of define "__AVX2__" : (undefined)
00:01:37.157 Fetching value of define "__AVX512BW__" : (undefined)
00:01:37.157 Fetching value of define "__AVX512CD__" : (undefined)
00:01:37.157 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:37.157 Fetching value of define "__AVX512F__" : (undefined)
00:01:37.157 Fetching value of define "__AVX512VL__" : (undefined)
00:01:37.157 Fetching value of define "__PCLMUL__" : 1
00:01:37.157 Fetching value of define "__RDRND__" : 1
00:01:37.157 Fetching value of define "__RDSEED__" : (undefined)
00:01:37.157 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:37.157 Fetching value of define "__znver1__" : (undefined)
00:01:37.157 Fetching value of define "__znver2__" : (undefined)
00:01:37.157 Fetching value of define "__znver3__" : (undefined)
00:01:37.157 Fetching value of define "__znver4__" : (undefined)
00:01:37.157 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:37.157 Message: lib/log: Defining dependency "log"
00:01:37.157 Message: lib/kvargs: Defining dependency "kvargs"
00:01:37.157 Message: lib/telemetry: Defining dependency "telemetry"
00:01:37.157 Checking for function "getentropy" : NO
00:01:37.157 Message: lib/eal: Defining dependency "eal"
00:01:37.157 Message: lib/ring: Defining dependency "ring"
00:01:37.157 Message: lib/rcu: Defining dependency "rcu"
00:01:37.157 Message: lib/mempool: Defining dependency "mempool"
00:01:37.157 Message: lib/mbuf: Defining dependency "mbuf"
00:01:37.157 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:37.157 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:37.157 Compiler for C supports arguments -mpclmul: YES
00:01:37.157 Compiler for C supports arguments -maes: YES
00:01:37.157 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:37.157 Compiler for C supports arguments -mavx512bw: YES
00:01:37.157 Compiler for C supports arguments -mavx512dq: YES
00:01:37.157 Compiler for C supports arguments -mavx512vl: YES
00:01:37.157 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:37.157 Compiler for C supports arguments -mavx2: YES
00:01:37.157 Compiler for C supports arguments -mavx: YES
00:01:37.157 Message: lib/net: Defining dependency "net"
00:01:37.157 Message: lib/meter: Defining dependency "meter"
00:01:37.157 Message: lib/ethdev: Defining dependency "ethdev"
00:01:37.157 Message: lib/pci: Defining dependency "pci"
00:01:37.157 Message: lib/cmdline: Defining dependency "cmdline"
00:01:37.157 Message: lib/hash: Defining dependency "hash"
00:01:37.157 Message: lib/timer: Defining dependency "timer"
00:01:37.157 Message: lib/compressdev: Defining dependency "compressdev"
00:01:37.157 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:37.157 Message: lib/dmadev: Defining dependency "dmadev"
00:01:37.157 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:37.157 Message: lib/power: Defining dependency "power"
00:01:37.157 Message: lib/reorder: Defining dependency "reorder"
00:01:37.157 Message: lib/security: Defining dependency "security"
00:01:37.157 Has header "linux/userfaultfd.h" : YES
00:01:37.157 Has header "linux/vduse.h" : YES
00:01:37.157 Message: lib/vhost: Defining dependency "vhost"
00:01:37.157 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:37.157 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:37.157 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:37.157 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:37.157 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:37.157 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:37.157 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:37.158 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:37.158 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:37.158 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:37.158 Program doxygen found: YES (/usr/bin/doxygen)
00:01:37.158 Configuring doxy-api-html.conf using configuration
00:01:37.158 Configuring doxy-api-man.conf using configuration
00:01:37.158 Program mandb found: YES (/usr/bin/mandb)
00:01:37.158 Program sphinx-build found: NO
00:01:37.158 Configuring rte_build_config.h using configuration
00:01:37.158 Message:
00:01:37.158 =================
00:01:37.158 Applications Enabled
00:01:37.158 =================
00:01:37.158
00:01:37.158 apps:
00:01:37.158
00:01:37.158
00:01:37.158 Message:
00:01:37.158 =================
00:01:37.158 Libraries Enabled
00:01:37.158 =================
00:01:37.158
00:01:37.158 libs:
00:01:37.158 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:37.158 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:37.158 cryptodev, dmadev, power, reorder, security, vhost,
00:01:37.158
00:01:37.158 Message:
00:01:37.158 ===============
00:01:37.158 Drivers Enabled
00:01:37.158 ===============
00:01:37.158
00:01:37.158 common:
00:01:37.158
00:01:37.158 bus:
00:01:37.158 pci, vdev,
00:01:37.158 mempool:
00:01:37.158 ring,
00:01:37.158 dma:
00:01:37.158
00:01:37.158 net:
00:01:37.158
00:01:37.158 crypto:
00:01:37.158
00:01:37.158 compress:
00:01:37.158
00:01:37.158 vdpa:
00:01:37.158
00:01:37.158
00:01:37.158 Message:
00:01:37.158 =================
00:01:37.158 Content Skipped
00:01:37.158 =================
00:01:37.158
00:01:37.158 apps:
00:01:37.158 dumpcap: explicitly disabled via build config
00:01:37.158 graph: explicitly disabled via build config
00:01:37.158 pdump: explicitly disabled via build config
00:01:37.158 proc-info: explicitly disabled via build config
00:01:37.158 test-acl: explicitly disabled via build config
00:01:37.158 test-bbdev: explicitly disabled via build config
00:01:37.158 test-cmdline: explicitly disabled via build config
00:01:37.158 test-compress-perf: explicitly disabled via build config
00:01:37.158 test-crypto-perf: explicitly disabled via build config
00:01:37.158 test-dma-perf: explicitly disabled via build config
00:01:37.158 test-eventdev: explicitly disabled via build config
00:01:37.158 test-fib: explicitly disabled via build config
00:01:37.158 test-flow-perf: explicitly disabled via build config
00:01:37.158 test-gpudev: explicitly disabled via build config
00:01:37.158 test-mldev: explicitly disabled via build config
00:01:37.158 test-pipeline: explicitly disabled via build config
00:01:37.158 test-pmd: explicitly disabled via build config
00:01:37.158 test-regex: explicitly disabled via build config
00:01:37.158 test-sad: explicitly disabled via build config
00:01:37.158 test-security-perf: explicitly disabled via build config
00:01:37.158
00:01:37.158 libs:
00:01:37.158 metrics: explicitly disabled via build config
00:01:37.158 acl: explicitly disabled via build config
00:01:37.158 bbdev: explicitly disabled via build config
00:01:37.158 bitratestats: explicitly disabled via build config
00:01:37.158 bpf: explicitly disabled via build config
00:01:37.158 cfgfile: explicitly disabled via build config
00:01:37.158 distributor: explicitly disabled via build config
00:01:37.158 efd: explicitly disabled via build config
00:01:37.158 eventdev: explicitly disabled via build config
00:01:37.158 dispatcher: explicitly disabled via build config
00:01:37.158 gpudev: explicitly disabled via build config
00:01:37.158 gro: explicitly disabled via build config
00:01:37.158 gso: explicitly disabled via build config
00:01:37.158 ip_frag: explicitly disabled via build config
00:01:37.158 jobstats: explicitly disabled via build config
00:01:37.158 latencystats: explicitly disabled via build config
00:01:37.158 lpm: explicitly disabled via build config
00:01:37.158 member: explicitly disabled via build config
00:01:37.158 pcapng: explicitly disabled via build config
00:01:37.158 rawdev: explicitly disabled via build config
00:01:37.158 regexdev: explicitly disabled via build config
00:01:37.158 mldev: explicitly disabled via build config
00:01:37.158 rib: explicitly disabled via build config
00:01:37.158 sched: explicitly disabled via build config
00:01:37.158 stack: explicitly disabled via build config
00:01:37.158 ipsec: explicitly disabled via build config
00:01:37.158 pdcp: explicitly disabled via build config
00:01:37.158 fib: explicitly disabled via build config
00:01:37.158 port: explicitly disabled via build config
00:01:37.158 pdump: explicitly disabled via build config
00:01:37.158 table: explicitly disabled via build config
00:01:37.158 pipeline: explicitly disabled via build config
00:01:37.158 graph: explicitly disabled via build config
00:01:37.158 node: explicitly disabled via build config
00:01:37.158
00:01:37.158 drivers:
00:01:37.158 common/cpt: not in enabled drivers build config
00:01:37.158 common/dpaax: not in enabled drivers build config
00:01:37.158 common/iavf: not in enabled drivers build config
00:01:37.158 common/idpf: not in enabled drivers build config
00:01:37.158 common/mvep: not in enabled drivers build config
00:01:37.158 common/octeontx: not in enabled drivers build config
00:01:37.158 bus/auxiliary: not in enabled drivers build config
00:01:37.158 bus/cdx: not in enabled drivers build config
00:01:37.158 bus/dpaa: not in enabled drivers build config
00:01:37.158 bus/fslmc: not in enabled drivers build config
00:01:37.158 bus/ifpga: not in enabled drivers build config
00:01:37.158 bus/platform: not in enabled drivers build config
00:01:37.158 bus/vmbus: not in enabled drivers build config
00:01:37.158 common/cnxk: not in enabled drivers build config
00:01:37.158 common/mlx5: not in enabled drivers build config
00:01:37.158 common/nfp: not in enabled drivers build config
00:01:37.158 common/qat: not in enabled drivers build config
00:01:37.158 common/sfc_efx: not in enabled drivers build config
00:01:37.158 mempool/bucket: not in enabled drivers build config
00:01:37.158 mempool/cnxk: not in enabled drivers build config
00:01:37.158 mempool/dpaa: not in enabled drivers build config
00:01:37.158 mempool/dpaa2: not in enabled drivers build config
00:01:37.158 mempool/octeontx: not in enabled drivers build config
00:01:37.158 mempool/stack: not in enabled drivers build config
00:01:37.158 dma/cnxk: not in enabled drivers build config
00:01:37.158 dma/dpaa: not in enabled drivers build config
00:01:37.158 dma/dpaa2: not in enabled drivers build config
00:01:37.158 dma/hisilicon: not in enabled drivers build config
00:01:37.158 dma/idxd: not in enabled drivers build config
00:01:37.158 dma/ioat: not in enabled drivers build config
00:01:37.158 dma/skeleton: not in enabled drivers build config
00:01:37.158 net/af_packet: not in enabled drivers build config
00:01:37.158 net/af_xdp: not in enabled drivers build config
00:01:37.158 net/ark: not in enabled drivers build config
00:01:37.158 net/atlantic: not in enabled drivers build config
00:01:37.158 net/avp: not in enabled drivers build config
00:01:37.158 net/axgbe: not in enabled drivers build config
00:01:37.158 net/bnx2x: not in enabled drivers build config
00:01:37.158 net/bnxt: not in enabled drivers build config
00:01:37.158 net/bonding: not in enabled drivers build config
00:01:37.158 net/cnxk: not in enabled drivers build config
00:01:37.158 net/cpfl: not in enabled drivers build config
00:01:37.158 net/cxgbe: not in enabled drivers build config
00:01:37.158 net/dpaa: not in enabled drivers build config
00:01:37.158 net/dpaa2: not in enabled drivers build config
00:01:37.158 net/e1000: not in enabled drivers build config
00:01:37.158 net/ena: not in enabled drivers build config
00:01:37.158 net/enetc: not in enabled drivers build config
00:01:37.158 net/enetfec: not
in enabled drivers build config 00:01:37.158 net/enic: not in enabled drivers build config 00:01:37.158 net/failsafe: not in enabled drivers build config 00:01:37.158 net/fm10k: not in enabled drivers build config 00:01:37.158 net/gve: not in enabled drivers build config 00:01:37.158 net/hinic: not in enabled drivers build config 00:01:37.158 net/hns3: not in enabled drivers build config 00:01:37.158 net/i40e: not in enabled drivers build config 00:01:37.158 net/iavf: not in enabled drivers build config 00:01:37.158 net/ice: not in enabled drivers build config 00:01:37.158 net/idpf: not in enabled drivers build config 00:01:37.158 net/igc: not in enabled drivers build config 00:01:37.158 net/ionic: not in enabled drivers build config 00:01:37.158 net/ipn3ke: not in enabled drivers build config 00:01:37.158 net/ixgbe: not in enabled drivers build config 00:01:37.158 net/mana: not in enabled drivers build config 00:01:37.158 net/memif: not in enabled drivers build config 00:01:37.158 net/mlx4: not in enabled drivers build config 00:01:37.158 net/mlx5: not in enabled drivers build config 00:01:37.158 net/mvneta: not in enabled drivers build config 00:01:37.158 net/mvpp2: not in enabled drivers build config 00:01:37.158 net/netvsc: not in enabled drivers build config 00:01:37.158 net/nfb: not in enabled drivers build config 00:01:37.158 net/nfp: not in enabled drivers build config 00:01:37.158 net/ngbe: not in enabled drivers build config 00:01:37.158 net/null: not in enabled drivers build config 00:01:37.158 net/octeontx: not in enabled drivers build config 00:01:37.158 net/octeon_ep: not in enabled drivers build config 00:01:37.158 net/pcap: not in enabled drivers build config 00:01:37.158 net/pfe: not in enabled drivers build config 00:01:37.158 net/qede: not in enabled drivers build config 00:01:37.158 net/ring: not in enabled drivers build config 00:01:37.158 net/sfc: not in enabled drivers build config 00:01:37.158 net/softnic: not in enabled drivers build config 
00:01:37.158 net/tap: not in enabled drivers build config 00:01:37.158 net/thunderx: not in enabled drivers build config 00:01:37.158 net/txgbe: not in enabled drivers build config 00:01:37.158 net/vdev_netvsc: not in enabled drivers build config 00:01:37.158 net/vhost: not in enabled drivers build config 00:01:37.158 net/virtio: not in enabled drivers build config 00:01:37.158 net/vmxnet3: not in enabled drivers build config 00:01:37.158 raw/*: missing internal dependency, "rawdev" 00:01:37.158 crypto/armv8: not in enabled drivers build config 00:01:37.158 crypto/bcmfs: not in enabled drivers build config 00:01:37.158 crypto/caam_jr: not in enabled drivers build config 00:01:37.158 crypto/ccp: not in enabled drivers build config 00:01:37.159 crypto/cnxk: not in enabled drivers build config 00:01:37.159 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.159 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.159 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.159 crypto/mlx5: not in enabled drivers build config 00:01:37.159 crypto/mvsam: not in enabled drivers build config 00:01:37.159 crypto/nitrox: not in enabled drivers build config 00:01:37.159 crypto/null: not in enabled drivers build config 00:01:37.159 crypto/octeontx: not in enabled drivers build config 00:01:37.159 crypto/openssl: not in enabled drivers build config 00:01:37.159 crypto/scheduler: not in enabled drivers build config 00:01:37.159 crypto/uadk: not in enabled drivers build config 00:01:37.159 crypto/virtio: not in enabled drivers build config 00:01:37.159 compress/isal: not in enabled drivers build config 00:01:37.159 compress/mlx5: not in enabled drivers build config 00:01:37.159 compress/octeontx: not in enabled drivers build config 00:01:37.159 compress/zlib: not in enabled drivers build config 00:01:37.159 regex/*: missing internal dependency, "regexdev" 00:01:37.159 ml/*: missing internal dependency, "mldev" 00:01:37.159 vdpa/ifc: not in enabled drivers 
build config 00:01:37.159 vdpa/mlx5: not in enabled drivers build config 00:01:37.159 vdpa/nfp: not in enabled drivers build config 00:01:37.159 vdpa/sfc: not in enabled drivers build config 00:01:37.159 event/*: missing internal dependency, "eventdev" 00:01:37.159 baseband/*: missing internal dependency, "bbdev" 00:01:37.159 gpu/*: missing internal dependency, "gpudev" 00:01:37.159 00:01:37.159 00:01:37.417 Build targets in project: 85 00:01:37.417 00:01:37.417 DPDK 23.11.0 00:01:37.417 00:01:37.417 User defined options 00:01:37.417 buildtype : debug 00:01:37.417 default_library : shared 00:01:37.417 libdir : lib 00:01:37.417 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:37.417 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:37.417 c_link_args : 00:01:37.417 cpu_instruction_set: native 00:01:37.417 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:37.417 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:37.417 enable_docs : false 00:01:37.417 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:37.417 enable_kmods : false 00:01:37.417 tests : false 00:01:37.417 00:01:37.417 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.992 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:37.992 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.992 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.992 [3/265] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.992 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.992 [5/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.992 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.992 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.992 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.992 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.992 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.992 [11/265] Linking static target lib/librte_kvargs.a 00:01:37.992 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.992 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.992 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.992 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.992 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.992 [17/265] Linking static target lib/librte_log.a 00:01:37.992 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:37.992 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:37.992 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:38.256 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:38.518 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.779 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.779 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.779 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:38.779 [26/265] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.779 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.779 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.779 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.779 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.779 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.779 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.779 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.779 [34/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.779 [35/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.779 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.779 [37/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:38.779 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.779 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.779 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.779 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.779 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.779 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:38.779 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.779 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.779 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.779 [47/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.779 [48/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.779 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.779 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.779 [51/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:38.779 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.779 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.779 [54/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:39.039 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.039 [56/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:39.039 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:39.039 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:39.039 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.039 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.039 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:39.039 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:39.039 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.039 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:39.039 [65/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:39.039 [66/265] Linking static target lib/librte_telemetry.a 00:01:39.039 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.039 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:39.039 [69/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.039 [70/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.039 [71/265] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:39.039 [72/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.039 [73/265] Linking static target lib/librte_pci.a 00:01:39.303 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.303 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.303 [76/265] Linking target lib/librte_log.so.24.0 00:01:39.303 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.303 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.303 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.303 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.303 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.303 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.303 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.303 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.303 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.564 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:39.564 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.564 [88/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:39.564 [89/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.564 [90/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:39.564 [91/265] Linking target lib/librte_kvargs.so.24.0 00:01:39.564 [92/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:39.564 [93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 
00:01:39.824 [94/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:39.824 [95/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.824 [96/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:39.824 [97/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:39.824 [98/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.824 [99/265] Linking static target lib/librte_ring.a 00:01:39.824 [100/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.824 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.824 [102/265] Linking static target lib/librte_meter.a 00:01:39.824 [103/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:39.824 [104/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.824 [105/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.824 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.824 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.824 [108/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.824 [109/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.824 [110/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.082 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.082 [112/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.082 [113/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.082 [114/265] Linking static target lib/librte_eal.a 00:01:40.082 [115/265] Linking static target lib/librte_mempool.a 00:01:40.082 [116/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.082 [117/265] Generating lib/telemetry.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:40.082 [118/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.082 [119/265] Linking static target lib/librte_rcu.a 00:01:40.082 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.082 [121/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.082 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.082 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.082 [124/265] Linking target lib/librte_telemetry.so.24.0 00:01:40.082 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.082 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.082 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.082 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.082 [129/265] Linking static target lib/librte_cmdline.a 00:01:40.082 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.082 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.082 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.082 [133/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.350 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.350 [135/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.350 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.350 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.350 [138/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:40.350 [139/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.350 [140/265] Compiling C object 
lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.350 [141/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.350 [142/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.350 [143/265] Linking static target lib/librte_net.a 00:01:40.608 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.608 [145/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.608 [146/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.608 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.608 [148/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.608 [149/265] Linking static target lib/librte_timer.a 00:01:40.608 [150/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.608 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.608 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.608 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.868 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.868 [155/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.868 [156/265] Linking static target lib/librte_dmadev.a 00:01:40.868 [157/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.868 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.868 [159/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.868 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.868 [161/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.868 [162/265] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.868 [163/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.868 [164/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.868 [165/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.868 [166/265] Linking static target lib/librte_hash.a 00:01:40.868 [167/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.156 [168/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.156 [169/265] Linking static target lib/librte_compressdev.a 00:01:41.156 [170/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:41.156 [171/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:41.156 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.156 [173/265] Linking static target lib/librte_power.a 00:01:41.156 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:41.156 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:41.156 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:41.156 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:41.156 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:41.156 [179/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.156 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:41.156 [181/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.156 [182/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:41.156 [183/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:41.415 [184/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:41.415 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:41.415 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:41.415 [187/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:41.415 [188/265] Linking static target lib/librte_reorder.a 00:01:41.415 [189/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.415 [190/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:41.415 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:41.415 [192/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.415 [193/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:41.415 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:41.415 [195/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:41.415 [196/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.415 [197/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.415 [198/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:41.415 [199/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.415 [200/265] Linking static target drivers/librte_bus_vdev.a 00:01:41.415 [201/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:41.415 [202/265] Linking static target lib/librte_mbuf.a 00:01:41.673 [203/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:41.673 [204/265] Linking static target lib/librte_security.a 00:01:41.673 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.673 [206/265] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.673 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.673 [208/265] Linking static target drivers/librte_bus_pci.a 00:01:41.673 [209/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.673 [210/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.673 [211/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:41.673 [212/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.673 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.673 [214/265] Linking static target drivers/librte_mempool_ring.a 00:01:41.673 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.673 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:41.673 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.931 [218/265] Linking static target lib/librte_ethdev.a 00:01:41.931 [219/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.931 [220/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.931 [221/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.931 [222/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.931 [223/265] Linking static target lib/librte_cryptodev.a 00:01:43.304 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.237 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:46.137 [226/265] Generating lib/ethdev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:46.137 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.137 [228/265] Linking target lib/librte_eal.so.24.0 00:01:46.396 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:46.396 [230/265] Linking target lib/librte_ring.so.24.0 00:01:46.396 [231/265] Linking target lib/librte_pci.so.24.0 00:01:46.396 [232/265] Linking target lib/librte_meter.so.24.0 00:01:46.396 [233/265] Linking target lib/librte_timer.so.24.0 00:01:46.396 [234/265] Linking target lib/librte_dmadev.so.24.0 00:01:46.396 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:46.396 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:46.396 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:46.396 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:46.396 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:46.396 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:46.396 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:46.396 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:46.396 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:46.654 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:46.654 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:46.654 [246/265] Linking target lib/librte_mbuf.so.24.0 00:01:46.654 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:46.912 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:46.912 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:46.912 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:46.912 
[251/265] Linking target lib/librte_net.so.24.0 00:01:46.912 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:46.912 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:46.912 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:46.912 [255/265] Linking target lib/librte_hash.so.24.0 00:01:46.912 [256/265] Linking target lib/librte_security.so.24.0 00:01:46.912 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:46.912 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:47.171 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:47.171 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:47.171 [261/265] Linking target lib/librte_power.so.24.0 00:01:49.703 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:49.703 [263/265] Linking static target lib/librte_vhost.a 00:01:50.639 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.639 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:50.639 INFO: autodetecting backend as ninja 00:01:50.639 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:51.576 CC lib/log/log.o 00:01:51.576 CC lib/log/log_flags.o 00:01:51.576 CC lib/ut_mock/mock.o 00:01:51.576 CC lib/log/log_deprecated.o 00:01:51.576 CC lib/ut/ut.o 00:01:51.576 LIB libspdk_ut_mock.a 00:01:51.576 LIB libspdk_log.a 00:01:51.576 SO libspdk_ut_mock.so.5.0 00:01:51.576 LIB libspdk_ut.a 00:01:51.576 SO libspdk_ut.so.1.0 00:01:51.576 SO libspdk_log.so.6.1 00:01:51.576 SYMLINK libspdk_ut_mock.so 00:01:51.576 SYMLINK libspdk_ut.so 00:01:51.576 SYMLINK libspdk_log.so 00:01:51.835 CC lib/dma/dma.o 00:01:51.835 CXX lib/trace_parser/trace.o 00:01:51.835 CC lib/ioat/ioat.o 00:01:51.835 CC lib/util/base64.o 
00:01:51.835 CC lib/util/bit_array.o 00:01:51.835 CC lib/util/cpuset.o 00:01:51.835 CC lib/util/crc16.o 00:01:51.835 CC lib/util/crc32.o 00:01:51.835 CC lib/util/crc32c.o 00:01:51.835 CC lib/util/crc32_ieee.o 00:01:51.835 CC lib/util/crc64.o 00:01:51.835 CC lib/util/dif.o 00:01:51.835 CC lib/util/fd.o 00:01:51.835 CC lib/util/file.o 00:01:51.835 CC lib/util/hexlify.o 00:01:51.835 CC lib/util/iov.o 00:01:51.835 CC lib/util/math.o 00:01:51.835 CC lib/util/pipe.o 00:01:51.835 CC lib/util/strerror_tls.o 00:01:51.835 CC lib/util/string.o 00:01:51.835 CC lib/util/uuid.o 00:01:51.835 CC lib/util/fd_group.o 00:01:51.835 CC lib/util/xor.o 00:01:51.835 CC lib/util/zipf.o 00:01:51.835 CC lib/vfio_user/host/vfio_user.o 00:01:51.835 CC lib/vfio_user/host/vfio_user_pci.o 00:01:52.093 LIB libspdk_dma.a 00:01:52.093 SO libspdk_dma.so.3.0 00:01:52.093 SYMLINK libspdk_dma.so 00:01:52.093 LIB libspdk_ioat.a 00:01:52.093 SO libspdk_ioat.so.6.0 00:01:52.093 SYMLINK libspdk_ioat.so 00:01:52.093 LIB libspdk_vfio_user.a 00:01:52.093 SO libspdk_vfio_user.so.4.0 00:01:52.352 SYMLINK libspdk_vfio_user.so 00:01:52.352 LIB libspdk_util.a 00:01:52.352 SO libspdk_util.so.8.0 00:01:52.610 SYMLINK libspdk_util.so 00:01:52.610 CC lib/env_dpdk/env.o 00:01:52.610 CC lib/conf/conf.o 00:01:52.610 CC lib/json/json_parse.o 00:01:52.610 CC lib/rdma/common.o 00:01:52.610 CC lib/idxd/idxd.o 00:01:52.610 CC lib/vmd/vmd.o 00:01:52.611 CC lib/json/json_util.o 00:01:52.611 CC lib/env_dpdk/memory.o 00:01:52.611 CC lib/vmd/led.o 00:01:52.611 CC lib/rdma/rdma_verbs.o 00:01:52.611 CC lib/idxd/idxd_user.o 00:01:52.611 CC lib/env_dpdk/pci.o 00:01:52.611 CC lib/json/json_write.o 00:01:52.611 CC lib/env_dpdk/init.o 00:01:52.611 CC lib/env_dpdk/threads.o 00:01:52.611 CC lib/env_dpdk/pci_ioat.o 00:01:52.611 CC lib/env_dpdk/pci_virtio.o 00:01:52.611 CC lib/env_dpdk/pci_vmd.o 00:01:52.611 CC lib/env_dpdk/pci_idxd.o 00:01:52.611 CC lib/env_dpdk/pci_event.o 00:01:52.611 CC lib/env_dpdk/sigbus_handler.o 00:01:52.611 CC 
lib/env_dpdk/pci_dpdk.o 00:01:52.611 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:52.611 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:52.611 LIB libspdk_trace_parser.a 00:01:52.869 SO libspdk_trace_parser.so.4.0 00:01:52.870 SYMLINK libspdk_trace_parser.so 00:01:52.870 LIB libspdk_conf.a 00:01:52.870 SO libspdk_conf.so.5.0 00:01:52.870 LIB libspdk_rdma.a 00:01:52.870 LIB libspdk_json.a 00:01:52.870 SYMLINK libspdk_conf.so 00:01:52.870 SO libspdk_rdma.so.5.0 00:01:52.870 SO libspdk_json.so.5.1 00:01:53.128 SYMLINK libspdk_rdma.so 00:01:53.128 SYMLINK libspdk_json.so 00:01:53.128 CC lib/jsonrpc/jsonrpc_server.o 00:01:53.128 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:53.128 CC lib/jsonrpc/jsonrpc_client.o 00:01:53.128 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:53.128 LIB libspdk_idxd.a 00:01:53.386 SO libspdk_idxd.so.11.0 00:01:53.386 SYMLINK libspdk_idxd.so 00:01:53.386 LIB libspdk_vmd.a 00:01:53.386 SO libspdk_vmd.so.5.0 00:01:53.386 SYMLINK libspdk_vmd.so 00:01:53.386 LIB libspdk_jsonrpc.a 00:01:53.386 SO libspdk_jsonrpc.so.5.1 00:01:53.644 SYMLINK libspdk_jsonrpc.so 00:01:53.644 CC lib/rpc/rpc.o 00:01:53.902 LIB libspdk_rpc.a 00:01:53.902 SO libspdk_rpc.so.5.0 00:01:53.902 SYMLINK libspdk_rpc.so 00:01:53.902 CC lib/trace/trace.o 00:01:53.902 CC lib/trace/trace_flags.o 00:01:53.902 CC lib/sock/sock.o 00:01:53.902 CC lib/trace/trace_rpc.o 00:01:53.902 CC lib/notify/notify.o 00:01:53.902 CC lib/sock/sock_rpc.o 00:01:53.902 CC lib/notify/notify_rpc.o 00:01:54.159 LIB libspdk_notify.a 00:01:54.160 SO libspdk_notify.so.5.0 00:01:54.160 LIB libspdk_trace.a 00:01:54.160 SYMLINK libspdk_notify.so 00:01:54.160 SO libspdk_trace.so.9.0 00:01:54.418 SYMLINK libspdk_trace.so 00:01:54.418 LIB libspdk_sock.a 00:01:54.418 SO libspdk_sock.so.8.0 00:01:54.418 CC lib/thread/thread.o 00:01:54.418 CC lib/thread/iobuf.o 00:01:54.418 SYMLINK libspdk_sock.so 00:01:54.682 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:54.682 CC lib/nvme/nvme_ctrlr.o 00:01:54.682 CC lib/nvme/nvme_fabric.o 00:01:54.682 CC 
lib/nvme/nvme_ns_cmd.o 00:01:54.682 CC lib/nvme/nvme_ns.o 00:01:54.682 CC lib/nvme/nvme_pcie_common.o 00:01:54.682 CC lib/nvme/nvme_pcie.o 00:01:54.682 CC lib/nvme/nvme_qpair.o 00:01:54.682 CC lib/nvme/nvme.o 00:01:54.682 CC lib/nvme/nvme_quirks.o 00:01:54.682 CC lib/nvme/nvme_transport.o 00:01:54.682 CC lib/nvme/nvme_discovery.o 00:01:54.682 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:54.682 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:54.682 CC lib/nvme/nvme_tcp.o 00:01:54.682 CC lib/nvme/nvme_opal.o 00:01:54.682 CC lib/nvme/nvme_io_msg.o 00:01:54.682 CC lib/nvme/nvme_poll_group.o 00:01:54.682 CC lib/nvme/nvme_zns.o 00:01:54.682 CC lib/nvme/nvme_cuse.o 00:01:54.682 CC lib/nvme/nvme_vfio_user.o 00:01:54.682 CC lib/nvme/nvme_rdma.o 00:01:54.682 LIB libspdk_env_dpdk.a 00:01:54.682 SO libspdk_env_dpdk.so.13.0 00:01:54.950 SYMLINK libspdk_env_dpdk.so 00:01:55.886 LIB libspdk_thread.a 00:01:56.144 SO libspdk_thread.so.9.0 00:01:56.144 SYMLINK libspdk_thread.so 00:01:56.144 CC lib/virtio/virtio.o 00:01:56.144 CC lib/accel/accel.o 00:01:56.144 CC lib/virtio/virtio_vhost_user.o 00:01:56.144 CC lib/accel/accel_rpc.o 00:01:56.144 CC lib/virtio/virtio_vfio_user.o 00:01:56.144 CC lib/accel/accel_sw.o 00:01:56.144 CC lib/virtio/virtio_pci.o 00:01:56.144 CC lib/blob/blobstore.o 00:01:56.144 CC lib/init/json_config.o 00:01:56.144 CC lib/init/subsystem.o 00:01:56.144 CC lib/blob/request.o 00:01:56.144 CC lib/init/subsystem_rpc.o 00:01:56.144 CC lib/blob/zeroes.o 00:01:56.144 CC lib/init/rpc.o 00:01:56.144 CC lib/blob/blob_bs_dev.o 00:01:56.403 LIB libspdk_init.a 00:01:56.403 SO libspdk_init.so.4.0 00:01:56.661 LIB libspdk_virtio.a 00:01:56.661 SYMLINK libspdk_init.so 00:01:56.661 SO libspdk_virtio.so.6.0 00:01:56.661 SYMLINK libspdk_virtio.so 00:01:56.661 CC lib/event/app.o 00:01:56.661 CC lib/event/reactor.o 00:01:56.661 CC lib/event/log_rpc.o 00:01:56.661 CC lib/event/app_rpc.o 00:01:56.661 CC lib/event/scheduler_static.o 00:01:56.920 LIB libspdk_nvme.a 00:01:56.920 SO 
libspdk_nvme.so.12.0 00:01:57.179 LIB libspdk_event.a 00:01:57.179 SO libspdk_event.so.12.0 00:01:57.179 SYMLINK libspdk_event.so 00:01:57.179 SYMLINK libspdk_nvme.so 00:01:57.179 LIB libspdk_accel.a 00:01:57.179 SO libspdk_accel.so.14.0 00:01:57.437 SYMLINK libspdk_accel.so 00:01:57.437 CC lib/bdev/bdev.o 00:01:57.437 CC lib/bdev/bdev_rpc.o 00:01:57.437 CC lib/bdev/bdev_zone.o 00:01:57.437 CC lib/bdev/part.o 00:01:57.437 CC lib/bdev/scsi_nvme.o 00:01:58.814 LIB libspdk_blob.a 00:01:59.072 SO libspdk_blob.so.10.1 00:01:59.073 SYMLINK libspdk_blob.so 00:01:59.073 CC lib/lvol/lvol.o 00:01:59.073 CC lib/blobfs/blobfs.o 00:01:59.073 CC lib/blobfs/tree.o 00:02:00.009 LIB libspdk_blobfs.a 00:02:00.009 LIB libspdk_lvol.a 00:02:00.009 SO libspdk_blobfs.so.9.0 00:02:00.009 SO libspdk_lvol.so.9.1 00:02:00.009 SYMLINK libspdk_blobfs.so 00:02:00.009 LIB libspdk_bdev.a 00:02:00.009 SYMLINK libspdk_lvol.so 00:02:00.009 SO libspdk_bdev.so.14.0 00:02:00.271 SYMLINK libspdk_bdev.so 00:02:00.271 CC lib/ublk/ublk.o 00:02:00.271 CC lib/nbd/nbd.o 00:02:00.271 CC lib/ublk/ublk_rpc.o 00:02:00.271 CC lib/scsi/dev.o 00:02:00.271 CC lib/nvmf/ctrlr.o 00:02:00.271 CC lib/scsi/lun.o 00:02:00.271 CC lib/nbd/nbd_rpc.o 00:02:00.271 CC lib/nvmf/ctrlr_discovery.o 00:02:00.271 CC lib/ftl/ftl_core.o 00:02:00.271 CC lib/scsi/port.o 00:02:00.271 CC lib/nvmf/ctrlr_bdev.o 00:02:00.271 CC lib/ftl/ftl_init.o 00:02:00.271 CC lib/scsi/scsi.o 00:02:00.271 CC lib/scsi/scsi_bdev.o 00:02:00.271 CC lib/ftl/ftl_layout.o 00:02:00.271 CC lib/nvmf/subsystem.o 00:02:00.271 CC lib/nvmf/nvmf.o 00:02:00.271 CC lib/ftl/ftl_debug.o 00:02:00.271 CC lib/scsi/scsi_pr.o 00:02:00.271 CC lib/ftl/ftl_io.o 00:02:00.271 CC lib/scsi/scsi_rpc.o 00:02:00.271 CC lib/nvmf/nvmf_rpc.o 00:02:00.271 CC lib/scsi/task.o 00:02:00.271 CC lib/nvmf/transport.o 00:02:00.271 CC lib/ftl/ftl_sb.o 00:02:00.271 CC lib/nvmf/tcp.o 00:02:00.271 CC lib/ftl/ftl_l2p.o 00:02:00.271 CC lib/ftl/ftl_l2p_flat.o 00:02:00.271 CC lib/nvmf/rdma.o 00:02:00.271 CC 
lib/ftl/ftl_nv_cache.o 00:02:00.271 CC lib/ftl/ftl_band_ops.o 00:02:00.271 CC lib/ftl/ftl_band.o 00:02:00.271 CC lib/ftl/ftl_writer.o 00:02:00.271 CC lib/ftl/ftl_rq.o 00:02:00.271 CC lib/ftl/ftl_l2p_cache.o 00:02:00.271 CC lib/ftl/ftl_reloc.o 00:02:00.271 CC lib/ftl/ftl_p2l.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:00.271 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:00.531 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:00.531 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:00.531 CC lib/ftl/utils/ftl_conf.o 00:02:00.531 CC lib/ftl/utils/ftl_md.o 00:02:00.531 CC lib/ftl/utils/ftl_mempool.o 00:02:00.531 CC lib/ftl/utils/ftl_bitmap.o 00:02:00.792 CC lib/ftl/utils/ftl_property.o 00:02:00.792 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:00.792 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:00.792 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:00.792 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:00.792 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:00.792 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:00.792 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:00.792 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:00.792 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:00.792 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:00.792 CC lib/ftl/base/ftl_base_dev.o 00:02:00.792 CC lib/ftl/base/ftl_base_bdev.o 00:02:00.792 CC lib/ftl/ftl_trace.o 00:02:01.051 LIB libspdk_nbd.a 00:02:01.051 SO libspdk_nbd.so.6.0 00:02:01.051 SYMLINK libspdk_nbd.so 00:02:01.051 LIB libspdk_scsi.a 00:02:01.051 SO libspdk_scsi.so.8.0 00:02:01.309 SYMLINK libspdk_scsi.so 00:02:01.309 LIB libspdk_ublk.a 00:02:01.309 SO libspdk_ublk.so.2.0 00:02:01.309 CC 
lib/vhost/vhost.o 00:02:01.309 CC lib/iscsi/conn.o 00:02:01.309 CC lib/vhost/vhost_rpc.o 00:02:01.309 CC lib/iscsi/init_grp.o 00:02:01.309 CC lib/vhost/vhost_scsi.o 00:02:01.309 CC lib/iscsi/iscsi.o 00:02:01.309 CC lib/iscsi/md5.o 00:02:01.309 CC lib/vhost/vhost_blk.o 00:02:01.309 CC lib/iscsi/param.o 00:02:01.309 CC lib/vhost/rte_vhost_user.o 00:02:01.309 CC lib/iscsi/portal_grp.o 00:02:01.309 CC lib/iscsi/tgt_node.o 00:02:01.309 CC lib/iscsi/iscsi_subsystem.o 00:02:01.309 CC lib/iscsi/iscsi_rpc.o 00:02:01.309 CC lib/iscsi/task.o 00:02:01.309 SYMLINK libspdk_ublk.so 00:02:01.568 LIB libspdk_ftl.a 00:02:01.826 SO libspdk_ftl.so.8.0 00:02:02.084 SYMLINK libspdk_ftl.so 00:02:02.650 LIB libspdk_vhost.a 00:02:02.650 SO libspdk_vhost.so.7.1 00:02:02.650 SYMLINK libspdk_vhost.so 00:02:02.650 LIB libspdk_nvmf.a 00:02:02.650 SO libspdk_nvmf.so.17.0 00:02:02.908 LIB libspdk_iscsi.a 00:02:02.908 SO libspdk_iscsi.so.7.0 00:02:02.908 SYMLINK libspdk_nvmf.so 00:02:02.908 SYMLINK libspdk_iscsi.so 00:02:03.167 CC module/env_dpdk/env_dpdk_rpc.o 00:02:03.167 CC module/accel/ioat/accel_ioat.o 00:02:03.167 CC module/accel/error/accel_error.o 00:02:03.167 CC module/accel/dsa/accel_dsa.o 00:02:03.167 CC module/accel/error/accel_error_rpc.o 00:02:03.167 CC module/accel/ioat/accel_ioat_rpc.o 00:02:03.167 CC module/accel/dsa/accel_dsa_rpc.o 00:02:03.167 CC module/scheduler/gscheduler/gscheduler.o 00:02:03.167 CC module/sock/posix/posix.o 00:02:03.167 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:03.167 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:03.167 CC module/accel/iaa/accel_iaa.o 00:02:03.167 CC module/accel/iaa/accel_iaa_rpc.o 00:02:03.167 CC module/blob/bdev/blob_bdev.o 00:02:03.425 LIB libspdk_env_dpdk_rpc.a 00:02:03.425 SO libspdk_env_dpdk_rpc.so.5.0 00:02:03.425 LIB libspdk_scheduler_gscheduler.a 00:02:03.425 SYMLINK libspdk_env_dpdk_rpc.so 00:02:03.425 LIB libspdk_scheduler_dpdk_governor.a 00:02:03.425 SO libspdk_scheduler_gscheduler.so.3.0 00:02:03.425 SO 
libspdk_scheduler_dpdk_governor.so.3.0 00:02:03.425 LIB libspdk_accel_error.a 00:02:03.425 LIB libspdk_accel_ioat.a 00:02:03.425 LIB libspdk_scheduler_dynamic.a 00:02:03.425 LIB libspdk_accel_iaa.a 00:02:03.425 SO libspdk_accel_error.so.1.0 00:02:03.425 SO libspdk_accel_ioat.so.5.0 00:02:03.425 SO libspdk_scheduler_dynamic.so.3.0 00:02:03.425 SYMLINK libspdk_scheduler_gscheduler.so 00:02:03.425 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:03.425 SO libspdk_accel_iaa.so.2.0 00:02:03.425 LIB libspdk_accel_dsa.a 00:02:03.425 SYMLINK libspdk_accel_error.so 00:02:03.425 SYMLINK libspdk_accel_ioat.so 00:02:03.425 LIB libspdk_blob_bdev.a 00:02:03.425 SYMLINK libspdk_scheduler_dynamic.so 00:02:03.425 SO libspdk_accel_dsa.so.4.0 00:02:03.425 SYMLINK libspdk_accel_iaa.so 00:02:03.425 SO libspdk_blob_bdev.so.10.1 00:02:03.685 SYMLINK libspdk_accel_dsa.so 00:02:03.685 SYMLINK libspdk_blob_bdev.so 00:02:03.685 CC module/bdev/delay/vbdev_delay.o 00:02:03.685 CC module/bdev/malloc/bdev_malloc.o 00:02:03.685 CC module/bdev/gpt/gpt.o 00:02:03.685 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:03.685 CC module/bdev/gpt/vbdev_gpt.o 00:02:03.685 CC module/bdev/nvme/bdev_nvme.o 00:02:03.685 CC module/bdev/raid/bdev_raid.o 00:02:03.685 CC module/bdev/passthru/vbdev_passthru.o 00:02:03.685 CC module/bdev/split/vbdev_split.o 00:02:03.685 CC module/bdev/raid/bdev_raid_rpc.o 00:02:03.685 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:03.685 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:03.685 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:03.685 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:03.685 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:03.685 CC module/bdev/raid/bdev_raid_sb.o 00:02:03.685 CC module/bdev/split/vbdev_split_rpc.o 00:02:03.685 CC module/bdev/raid/raid0.o 00:02:03.685 CC module/bdev/null/bdev_null.o 00:02:03.685 CC module/bdev/error/vbdev_error.o 00:02:03.685 CC module/bdev/aio/bdev_aio.o 00:02:03.685 CC module/bdev/error/vbdev_error_rpc.o 00:02:03.685 CC 
module/bdev/iscsi/bdev_iscsi.o 00:02:03.685 CC module/bdev/nvme/nvme_rpc.o 00:02:03.685 CC module/bdev/raid/raid1.o 00:02:03.685 CC module/bdev/null/bdev_null_rpc.o 00:02:03.685 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:03.685 CC module/bdev/nvme/bdev_mdns_client.o 00:02:03.685 CC module/bdev/aio/bdev_aio_rpc.o 00:02:03.685 CC module/bdev/raid/concat.o 00:02:03.685 CC module/bdev/nvme/vbdev_opal.o 00:02:03.685 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:03.685 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:03.685 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:03.685 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:03.685 CC module/bdev/lvol/vbdev_lvol.o 00:02:03.685 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:03.685 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:03.685 CC module/bdev/ftl/bdev_ftl.o 00:02:03.685 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:03.685 CC module/blobfs/bdev/blobfs_bdev.o 00:02:03.685 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:04.252 LIB libspdk_sock_posix.a 00:02:04.252 LIB libspdk_blobfs_bdev.a 00:02:04.252 SO libspdk_sock_posix.so.5.0 00:02:04.252 LIB libspdk_bdev_split.a 00:02:04.252 LIB libspdk_bdev_zone_block.a 00:02:04.252 SO libspdk_blobfs_bdev.so.5.0 00:02:04.252 SO libspdk_bdev_zone_block.so.5.0 00:02:04.252 SO libspdk_bdev_split.so.5.0 00:02:04.252 LIB libspdk_bdev_error.a 00:02:04.252 SYMLINK libspdk_sock_posix.so 00:02:04.252 SYMLINK libspdk_blobfs_bdev.so 00:02:04.252 SO libspdk_bdev_error.so.5.0 00:02:04.252 LIB libspdk_bdev_ftl.a 00:02:04.252 SYMLINK libspdk_bdev_zone_block.so 00:02:04.252 SYMLINK libspdk_bdev_split.so 00:02:04.252 LIB libspdk_bdev_null.a 00:02:04.252 LIB libspdk_bdev_gpt.a 00:02:04.252 SO libspdk_bdev_ftl.so.5.0 00:02:04.252 SO libspdk_bdev_null.so.5.0 00:02:04.252 LIB libspdk_bdev_malloc.a 00:02:04.252 SYMLINK libspdk_bdev_error.so 00:02:04.252 SO libspdk_bdev_gpt.so.5.0 00:02:04.252 LIB libspdk_bdev_passthru.a 00:02:04.252 LIB libspdk_bdev_aio.a 00:02:04.252 SO libspdk_bdev_malloc.so.5.0 00:02:04.252 SO 
libspdk_bdev_passthru.so.5.0 00:02:04.252 SYMLINK libspdk_bdev_ftl.so 00:02:04.252 SYMLINK libspdk_bdev_null.so 00:02:04.252 SO libspdk_bdev_aio.so.5.0 00:02:04.252 SYMLINK libspdk_bdev_gpt.so 00:02:04.252 LIB libspdk_bdev_iscsi.a 00:02:04.252 LIB libspdk_bdev_delay.a 00:02:04.252 SYMLINK libspdk_bdev_malloc.so 00:02:04.252 SYMLINK libspdk_bdev_passthru.so 00:02:04.252 SO libspdk_bdev_iscsi.so.5.0 00:02:04.252 SO libspdk_bdev_delay.so.5.0 00:02:04.510 SYMLINK libspdk_bdev_aio.so 00:02:04.510 SYMLINK libspdk_bdev_iscsi.so 00:02:04.510 LIB libspdk_bdev_lvol.a 00:02:04.510 SYMLINK libspdk_bdev_delay.so 00:02:04.510 LIB libspdk_bdev_virtio.a 00:02:04.510 SO libspdk_bdev_lvol.so.5.0 00:02:04.510 SO libspdk_bdev_virtio.so.5.0 00:02:04.510 SYMLINK libspdk_bdev_lvol.so 00:02:04.510 SYMLINK libspdk_bdev_virtio.so 00:02:04.768 LIB libspdk_bdev_raid.a 00:02:04.768 SO libspdk_bdev_raid.so.5.0 00:02:05.026 SYMLINK libspdk_bdev_raid.so 00:02:05.961 LIB libspdk_bdev_nvme.a 00:02:05.962 SO libspdk_bdev_nvme.so.6.0 00:02:06.220 SYMLINK libspdk_bdev_nvme.so 00:02:06.220 CC module/event/subsystems/vmd/vmd.o 00:02:06.220 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:06.220 CC module/event/subsystems/iobuf/iobuf.o 00:02:06.220 CC module/event/subsystems/scheduler/scheduler.o 00:02:06.220 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:06.220 CC module/event/subsystems/sock/sock.o 00:02:06.220 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:06.479 LIB libspdk_event_sock.a 00:02:06.479 LIB libspdk_event_scheduler.a 00:02:06.479 LIB libspdk_event_vhost_blk.a 00:02:06.479 LIB libspdk_event_vmd.a 00:02:06.479 LIB libspdk_event_iobuf.a 00:02:06.479 SO libspdk_event_sock.so.4.0 00:02:06.479 SO libspdk_event_scheduler.so.3.0 00:02:06.479 SO libspdk_event_vhost_blk.so.2.0 00:02:06.479 SO libspdk_event_vmd.so.5.0 00:02:06.479 SO libspdk_event_iobuf.so.2.0 00:02:06.479 SYMLINK libspdk_event_sock.so 00:02:06.479 SYMLINK libspdk_event_vhost_blk.so 00:02:06.479 SYMLINK 
libspdk_event_scheduler.so 00:02:06.479 SYMLINK libspdk_event_vmd.so 00:02:06.479 SYMLINK libspdk_event_iobuf.so 00:02:06.738 CC module/event/subsystems/accel/accel.o 00:02:06.738 LIB libspdk_event_accel.a 00:02:06.998 SO libspdk_event_accel.so.5.0 00:02:06.998 SYMLINK libspdk_event_accel.so 00:02:06.998 CC module/event/subsystems/bdev/bdev.o 00:02:07.257 LIB libspdk_event_bdev.a 00:02:07.257 SO libspdk_event_bdev.so.5.0 00:02:07.257 SYMLINK libspdk_event_bdev.so 00:02:07.515 CC module/event/subsystems/nbd/nbd.o 00:02:07.515 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:07.515 CC module/event/subsystems/ublk/ublk.o 00:02:07.515 CC module/event/subsystems/scsi/scsi.o 00:02:07.515 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:07.515 LIB libspdk_event_ublk.a 00:02:07.515 LIB libspdk_event_nbd.a 00:02:07.515 LIB libspdk_event_scsi.a 00:02:07.515 SO libspdk_event_nbd.so.5.0 00:02:07.515 SO libspdk_event_ublk.so.2.0 00:02:07.515 SO libspdk_event_scsi.so.5.0 00:02:07.515 SYMLINK libspdk_event_nbd.so 00:02:07.515 SYMLINK libspdk_event_ublk.so 00:02:07.515 SYMLINK libspdk_event_scsi.so 00:02:07.515 LIB libspdk_event_nvmf.a 00:02:07.774 SO libspdk_event_nvmf.so.5.0 00:02:07.774 SYMLINK libspdk_event_nvmf.so 00:02:07.774 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:07.774 CC module/event/subsystems/iscsi/iscsi.o 00:02:07.774 LIB libspdk_event_vhost_scsi.a 00:02:07.774 SO libspdk_event_vhost_scsi.so.2.0 00:02:07.774 LIB libspdk_event_iscsi.a 00:02:08.032 SO libspdk_event_iscsi.so.5.0 00:02:08.032 SYMLINK libspdk_event_vhost_scsi.so 00:02:08.032 SYMLINK libspdk_event_iscsi.so 00:02:08.032 SO libspdk.so.5.0 00:02:08.032 SYMLINK libspdk.so 00:02:08.297 CC test/rpc_client/rpc_client_test.o 00:02:08.297 TEST_HEADER include/spdk/accel.h 00:02:08.297 CXX app/trace/trace.o 00:02:08.297 TEST_HEADER include/spdk/accel_module.h 00:02:08.297 CC app/spdk_nvme_identify/identify.o 00:02:08.297 CC app/trace_record/trace_record.o 00:02:08.297 TEST_HEADER include/spdk/assert.h 
00:02:08.297 CC app/spdk_top/spdk_top.o 00:02:08.297 CC app/spdk_nvme_perf/perf.o 00:02:08.297 TEST_HEADER include/spdk/barrier.h 00:02:08.297 CC app/spdk_lspci/spdk_lspci.o 00:02:08.297 CC app/spdk_nvme_discover/discovery_aer.o 00:02:08.297 TEST_HEADER include/spdk/base64.h 00:02:08.297 TEST_HEADER include/spdk/bdev.h 00:02:08.297 TEST_HEADER include/spdk/bdev_module.h 00:02:08.297 TEST_HEADER include/spdk/bdev_zone.h 00:02:08.297 TEST_HEADER include/spdk/bit_array.h 00:02:08.297 TEST_HEADER include/spdk/bit_pool.h 00:02:08.297 TEST_HEADER include/spdk/blob_bdev.h 00:02:08.297 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:08.297 TEST_HEADER include/spdk/blobfs.h 00:02:08.297 TEST_HEADER include/spdk/blob.h 00:02:08.297 TEST_HEADER include/spdk/conf.h 00:02:08.297 TEST_HEADER include/spdk/config.h 00:02:08.297 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:08.297 TEST_HEADER include/spdk/cpuset.h 00:02:08.297 TEST_HEADER include/spdk/crc16.h 00:02:08.297 TEST_HEADER include/spdk/crc32.h 00:02:08.297 TEST_HEADER include/spdk/crc64.h 00:02:08.297 TEST_HEADER include/spdk/dif.h 00:02:08.297 TEST_HEADER include/spdk/dma.h 00:02:08.297 TEST_HEADER include/spdk/endian.h 00:02:08.297 CC app/spdk_dd/spdk_dd.o 00:02:08.297 TEST_HEADER include/spdk/env_dpdk.h 00:02:08.297 TEST_HEADER include/spdk/env.h 00:02:08.297 CC app/nvmf_tgt/nvmf_main.o 00:02:08.297 CC app/iscsi_tgt/iscsi_tgt.o 00:02:08.297 CC test/app/jsoncat/jsoncat.o 00:02:08.297 CC test/app/stub/stub.o 00:02:08.297 TEST_HEADER include/spdk/event.h 00:02:08.297 CC test/event/reactor/reactor.o 00:02:08.297 CC examples/util/zipf/zipf.o 00:02:08.297 TEST_HEADER include/spdk/fd_group.h 00:02:08.297 TEST_HEADER include/spdk/fd.h 00:02:08.297 CC app/vhost/vhost.o 00:02:08.297 TEST_HEADER include/spdk/file.h 00:02:08.297 CC examples/ioat/perf/perf.o 00:02:08.297 CC examples/accel/perf/accel_perf.o 00:02:08.297 CC examples/idxd/perf/perf.o 00:02:08.297 CC test/app/histogram_perf/histogram_perf.o 00:02:08.297 TEST_HEADER 
include/spdk/ftl.h 00:02:08.297 CC test/nvme/reset/reset.o 00:02:08.297 CC examples/ioat/verify/verify.o 00:02:08.297 CC examples/vmd/lsvmd/lsvmd.o 00:02:08.297 CC test/event/reactor_perf/reactor_perf.o 00:02:08.297 TEST_HEADER include/spdk/gpt_spec.h 00:02:08.297 CC examples/sock/hello_world/hello_sock.o 00:02:08.297 CC examples/vmd/led/led.o 00:02:08.297 CC test/event/event_perf/event_perf.o 00:02:08.297 TEST_HEADER include/spdk/hexlify.h 00:02:08.297 TEST_HEADER include/spdk/histogram_data.h 00:02:08.297 CC examples/nvme/hello_world/hello_world.o 00:02:08.297 CC test/thread/poller_perf/poller_perf.o 00:02:08.297 CC test/nvme/aer/aer.o 00:02:08.297 CC app/fio/nvme/fio_plugin.o 00:02:08.297 TEST_HEADER include/spdk/idxd.h 00:02:08.297 CC test/event/app_repeat/app_repeat.o 00:02:08.297 TEST_HEADER include/spdk/idxd_spec.h 00:02:08.297 TEST_HEADER include/spdk/init.h 00:02:08.297 CC app/spdk_tgt/spdk_tgt.o 00:02:08.297 TEST_HEADER include/spdk/ioat.h 00:02:08.297 TEST_HEADER include/spdk/ioat_spec.h 00:02:08.297 TEST_HEADER include/spdk/iscsi_spec.h 00:02:08.297 TEST_HEADER include/spdk/json.h 00:02:08.297 TEST_HEADER include/spdk/jsonrpc.h 00:02:08.297 TEST_HEADER include/spdk/likely.h 00:02:08.297 TEST_HEADER include/spdk/log.h 00:02:08.297 CC test/blobfs/mkfs/mkfs.o 00:02:08.297 TEST_HEADER include/spdk/lvol.h 00:02:08.297 CC test/accel/dif/dif.o 00:02:08.297 CC test/dma/test_dma/test_dma.o 00:02:08.297 TEST_HEADER include/spdk/memory.h 00:02:08.297 CC test/bdev/bdevio/bdevio.o 00:02:08.297 CC examples/thread/thread/thread_ex.o 00:02:08.297 TEST_HEADER include/spdk/mmio.h 00:02:08.297 CC test/app/bdev_svc/bdev_svc.o 00:02:08.297 TEST_HEADER include/spdk/nbd.h 00:02:08.297 CC examples/bdev/hello_world/hello_bdev.o 00:02:08.297 CC examples/nvmf/nvmf/nvmf.o 00:02:08.297 TEST_HEADER include/spdk/notify.h 00:02:08.297 TEST_HEADER include/spdk/nvme.h 00:02:08.297 CC examples/blob/hello_world/hello_blob.o 00:02:08.297 CC examples/bdev/bdevperf/bdevperf.o 00:02:08.297 
TEST_HEADER include/spdk/nvme_intel.h 00:02:08.297 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:08.297 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:08.297 CC test/env/mem_callbacks/mem_callbacks.o 00:02:08.297 TEST_HEADER include/spdk/nvme_spec.h 00:02:08.297 TEST_HEADER include/spdk/nvme_zns.h 00:02:08.297 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:08.297 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:08.297 CC test/lvol/esnap/esnap.o 00:02:08.297 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:08.297 TEST_HEADER include/spdk/nvmf.h 00:02:08.297 TEST_HEADER include/spdk/nvmf_spec.h 00:02:08.297 TEST_HEADER include/spdk/nvmf_transport.h 00:02:08.297 TEST_HEADER include/spdk/opal.h 00:02:08.598 TEST_HEADER include/spdk/opal_spec.h 00:02:08.598 TEST_HEADER include/spdk/pci_ids.h 00:02:08.598 TEST_HEADER include/spdk/pipe.h 00:02:08.598 TEST_HEADER include/spdk/queue.h 00:02:08.598 TEST_HEADER include/spdk/reduce.h 00:02:08.598 TEST_HEADER include/spdk/rpc.h 00:02:08.598 TEST_HEADER include/spdk/scheduler.h 00:02:08.598 TEST_HEADER include/spdk/scsi.h 00:02:08.598 TEST_HEADER include/spdk/scsi_spec.h 00:02:08.598 TEST_HEADER include/spdk/sock.h 00:02:08.598 TEST_HEADER include/spdk/stdinc.h 00:02:08.598 TEST_HEADER include/spdk/string.h 00:02:08.598 TEST_HEADER include/spdk/thread.h 00:02:08.598 TEST_HEADER include/spdk/trace.h 00:02:08.598 TEST_HEADER include/spdk/trace_parser.h 00:02:08.598 LINK spdk_lspci 00:02:08.598 TEST_HEADER include/spdk/tree.h 00:02:08.598 TEST_HEADER include/spdk/ublk.h 00:02:08.598 TEST_HEADER include/spdk/util.h 00:02:08.598 TEST_HEADER include/spdk/uuid.h 00:02:08.598 TEST_HEADER include/spdk/version.h 00:02:08.598 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:08.598 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:08.598 TEST_HEADER include/spdk/vhost.h 00:02:08.598 TEST_HEADER include/spdk/vmd.h 00:02:08.598 TEST_HEADER include/spdk/xor.h 00:02:08.598 TEST_HEADER include/spdk/zipf.h 00:02:08.598 CXX test/cpp_headers/accel.o 00:02:08.598 
LINK jsoncat 00:02:08.598 LINK rpc_client_test 00:02:08.598 LINK lsvmd 00:02:08.598 LINK reactor 00:02:08.598 LINK reactor_perf 00:02:08.598 LINK led 00:02:08.598 LINK histogram_perf 00:02:08.598 LINK zipf 00:02:08.598 LINK spdk_nvme_discover 00:02:08.598 LINK event_perf 00:02:08.598 LINK interrupt_tgt 00:02:08.598 LINK stub 00:02:08.598 LINK poller_perf 00:02:08.598 LINK app_repeat 00:02:08.598 LINK nvmf_tgt 00:02:08.598 LINK vhost 00:02:08.598 LINK spdk_trace_record 00:02:08.598 LINK iscsi_tgt 00:02:08.874 LINK bdev_svc 00:02:08.874 LINK ioat_perf 00:02:08.874 LINK verify 00:02:08.874 LINK spdk_tgt 00:02:08.874 LINK mkfs 00:02:08.874 LINK hello_world 00:02:08.874 LINK hello_sock 00:02:08.874 LINK reset 00:02:08.874 LINK hello_bdev 00:02:08.874 LINK hello_blob 00:02:08.874 LINK thread 00:02:08.874 LINK aer 00:02:08.874 CXX test/cpp_headers/accel_module.o 00:02:08.874 LINK nvmf 00:02:08.874 LINK idxd_perf 00:02:08.874 CC examples/nvme/reconnect/reconnect.o 00:02:08.874 CC test/nvme/sgl/sgl.o 00:02:08.874 LINK spdk_dd 00:02:08.874 CC test/event/scheduler/scheduler.o 00:02:08.874 CC test/nvme/e2edp/nvme_dp.o 00:02:08.874 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:08.874 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:09.141 CC test/nvme/overhead/overhead.o 00:02:09.141 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:09.141 CXX test/cpp_headers/assert.o 00:02:09.141 LINK spdk_trace 00:02:09.141 CC app/fio/bdev/fio_plugin.o 00:02:09.141 CC examples/nvme/arbitration/arbitration.o 00:02:09.141 CC test/nvme/err_injection/err_injection.o 00:02:09.141 CC examples/nvme/hotplug/hotplug.o 00:02:09.141 LINK dif 00:02:09.141 LINK test_dma 00:02:09.141 CC test/env/vtophys/vtophys.o 00:02:09.141 LINK bdevio 00:02:09.141 CXX test/cpp_headers/barrier.o 00:02:09.141 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:09.141 CC examples/blob/cli/blobcli.o 00:02:09.141 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:09.141 CC test/nvme/startup/startup.o 00:02:09.141 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:02:09.141 LINK accel_perf 00:02:09.141 CC test/env/memory/memory_ut.o 00:02:09.141 CC test/env/pci/pci_ut.o 00:02:09.141 LINK nvme_fuzz 00:02:09.141 CC test/nvme/reserve/reserve.o 00:02:09.141 CXX test/cpp_headers/base64.o 00:02:09.141 CC test/nvme/simple_copy/simple_copy.o 00:02:09.141 CXX test/cpp_headers/bdev.o 00:02:09.141 CC test/nvme/boot_partition/boot_partition.o 00:02:09.141 CC test/nvme/connect_stress/connect_stress.o 00:02:09.141 CC examples/nvme/abort/abort.o 00:02:09.402 CXX test/cpp_headers/bdev_module.o 00:02:09.402 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:09.402 CC test/nvme/compliance/nvme_compliance.o 00:02:09.402 CC test/nvme/fused_ordering/fused_ordering.o 00:02:09.402 LINK vtophys 00:02:09.402 LINK spdk_nvme 00:02:09.402 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:09.402 CC test/nvme/fdp/fdp.o 00:02:09.402 CXX test/cpp_headers/bdev_zone.o 00:02:09.402 CXX test/cpp_headers/bit_array.o 00:02:09.402 LINK scheduler 00:02:09.402 LINK err_injection 00:02:09.402 LINK env_dpdk_post_init 00:02:09.402 CXX test/cpp_headers/bit_pool.o 00:02:09.402 CC test/nvme/cuse/cuse.o 00:02:09.402 CXX test/cpp_headers/blob_bdev.o 00:02:09.402 CXX test/cpp_headers/blobfs_bdev.o 00:02:09.402 LINK sgl 00:02:09.402 LINK startup 00:02:09.402 LINK nvme_dp 00:02:09.402 CXX test/cpp_headers/blobfs.o 00:02:09.402 CXX test/cpp_headers/blob.o 00:02:09.402 CXX test/cpp_headers/conf.o 00:02:09.402 LINK cmb_copy 00:02:09.667 LINK hotplug 00:02:09.667 CXX test/cpp_headers/config.o 00:02:09.667 LINK overhead 00:02:09.667 CXX test/cpp_headers/cpuset.o 00:02:09.667 LINK mem_callbacks 00:02:09.667 LINK reserve 00:02:09.667 CXX test/cpp_headers/crc16.o 00:02:09.667 LINK boot_partition 00:02:09.667 LINK reconnect 00:02:09.667 LINK connect_stress 00:02:09.667 CXX test/cpp_headers/crc32.o 00:02:09.667 CXX test/cpp_headers/crc64.o 00:02:09.667 LINK arbitration 00:02:09.667 LINK simple_copy 00:02:09.667 LINK spdk_nvme_perf 00:02:09.667 
LINK pmr_persistence 00:02:09.667 CXX test/cpp_headers/dif.o 00:02:09.667 LINK doorbell_aers 00:02:09.667 CXX test/cpp_headers/dma.o 00:02:09.667 LINK fused_ordering 00:02:09.667 CXX test/cpp_headers/endian.o 00:02:09.667 CXX test/cpp_headers/env_dpdk.o 00:02:09.667 LINK bdevperf 00:02:09.667 LINK spdk_nvme_identify 00:02:09.930 CXX test/cpp_headers/env.o 00:02:09.930 CXX test/cpp_headers/event.o 00:02:09.930 CXX test/cpp_headers/fd_group.o 00:02:09.930 CXX test/cpp_headers/fd.o 00:02:09.930 CXX test/cpp_headers/file.o 00:02:09.930 CXX test/cpp_headers/ftl.o 00:02:09.930 CXX test/cpp_headers/gpt_spec.o 00:02:09.930 CXX test/cpp_headers/hexlify.o 00:02:09.930 CXX test/cpp_headers/histogram_data.o 00:02:09.930 CXX test/cpp_headers/idxd.o 00:02:09.930 LINK spdk_top 00:02:09.930 CXX test/cpp_headers/idxd_spec.o 00:02:09.930 CXX test/cpp_headers/init.o 00:02:09.930 LINK vhost_fuzz 00:02:09.930 CXX test/cpp_headers/ioat.o 00:02:09.930 CXX test/cpp_headers/ioat_spec.o 00:02:09.930 CXX test/cpp_headers/iscsi_spec.o 00:02:09.930 CXX test/cpp_headers/json.o 00:02:09.930 LINK pci_ut 00:02:09.930 LINK nvme_manage 00:02:09.930 CXX test/cpp_headers/jsonrpc.o 00:02:09.930 CXX test/cpp_headers/likely.o 00:02:09.930 CXX test/cpp_headers/log.o 00:02:09.930 CXX test/cpp_headers/lvol.o 00:02:09.930 CXX test/cpp_headers/memory.o 00:02:09.930 LINK nvme_compliance 00:02:09.930 LINK abort 00:02:09.930 CXX test/cpp_headers/mmio.o 00:02:09.930 CXX test/cpp_headers/nbd.o 00:02:09.930 CXX test/cpp_headers/notify.o 00:02:09.930 CXX test/cpp_headers/nvme.o 00:02:09.930 LINK fdp 00:02:09.930 CXX test/cpp_headers/nvme_intel.o 00:02:09.930 LINK spdk_bdev 00:02:09.930 CXX test/cpp_headers/nvme_ocssd.o 00:02:09.930 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:10.192 CXX test/cpp_headers/nvme_spec.o 00:02:10.192 CXX test/cpp_headers/nvmf_cmd.o 00:02:10.192 CXX test/cpp_headers/nvme_zns.o 00:02:10.192 LINK blobcli 00:02:10.192 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:10.192 CXX 
test/cpp_headers/nvmf.o 00:02:10.192 CXX test/cpp_headers/nvmf_spec.o 00:02:10.192 CXX test/cpp_headers/nvmf_transport.o 00:02:10.192 CXX test/cpp_headers/opal.o 00:02:10.192 CXX test/cpp_headers/opal_spec.o 00:02:10.192 CXX test/cpp_headers/pci_ids.o 00:02:10.192 CXX test/cpp_headers/pipe.o 00:02:10.192 CXX test/cpp_headers/queue.o 00:02:10.192 CXX test/cpp_headers/reduce.o 00:02:10.192 CXX test/cpp_headers/rpc.o 00:02:10.192 CXX test/cpp_headers/scheduler.o 00:02:10.192 CXX test/cpp_headers/scsi.o 00:02:10.192 CXX test/cpp_headers/scsi_spec.o 00:02:10.192 CXX test/cpp_headers/sock.o 00:02:10.192 CXX test/cpp_headers/stdinc.o 00:02:10.192 CXX test/cpp_headers/string.o 00:02:10.192 CXX test/cpp_headers/thread.o 00:02:10.192 CXX test/cpp_headers/trace.o 00:02:10.192 CXX test/cpp_headers/trace_parser.o 00:02:10.192 CXX test/cpp_headers/tree.o 00:02:10.192 CXX test/cpp_headers/ublk.o 00:02:10.192 CXX test/cpp_headers/util.o 00:02:10.192 CXX test/cpp_headers/uuid.o 00:02:10.192 CXX test/cpp_headers/version.o 00:02:10.192 CXX test/cpp_headers/vfio_user_pci.o 00:02:10.192 CXX test/cpp_headers/vfio_user_spec.o 00:02:10.192 CXX test/cpp_headers/vhost.o 00:02:10.192 CXX test/cpp_headers/vmd.o 00:02:10.192 CXX test/cpp_headers/xor.o 00:02:10.192 CXX test/cpp_headers/zipf.o 00:02:10.760 LINK memory_ut 00:02:11.017 LINK cuse 00:02:11.275 LINK iscsi_fuzz 00:02:13.801 LINK esnap 00:02:14.059 00:02:14.059 real 0m45.374s 00:02:14.059 user 9m36.304s 00:02:14.059 sys 2m8.233s 00:02:14.059 06:42:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:14.059 06:42:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.059 ************************************ 00:02:14.059 END TEST make 00:02:14.059 ************************************ 00:02:14.060 06:42:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:14.060 06:42:21 -- nvmf/common.sh@7 -- # uname -s 00:02:14.060 06:42:21 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:02:14.060 06:42:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:14.060 06:42:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:14.060 06:42:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:14.060 06:42:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:14.060 06:42:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:14.060 06:42:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:14.060 06:42:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:14.060 06:42:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:14.060 06:42:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:14.060 06:42:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:14.060 06:42:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:14.060 06:42:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:14.060 06:42:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:14.060 06:42:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:14.060 06:42:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:14.060 06:42:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:14.060 06:42:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:14.060 06:42:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:14.060 06:42:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.060 06:42:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.060 06:42:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.060 06:42:21 -- paths/export.sh@5 -- # export PATH 00:02:14.060 06:42:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.060 06:42:21 -- nvmf/common.sh@46 -- # : 0 00:02:14.060 06:42:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:14.060 06:42:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:14.060 06:42:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:14.060 06:42:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:14.060 06:42:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:14.060 06:42:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:14.060 06:42:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:14.060 06:42:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:14.318 06:42:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:14.318 06:42:21 -- spdk/autotest.sh@32 -- # uname -s 00:02:14.318 06:42:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:14.318 06:42:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:14.318 06:42:21 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:14.318 06:42:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:14.318 06:42:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:14.318 06:42:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:14.318 06:42:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:14.318 06:42:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:14.318 06:42:21 -- spdk/autotest.sh@48 -- # udevadm_pid=2876871 00:02:14.318 06:42:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:14.318 06:42:21 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:14.318 06:42:21 -- spdk/autotest.sh@54 -- # echo 2876873 00:02:14.318 06:42:21 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:14.318 06:42:21 -- spdk/autotest.sh@56 -- # echo 2876874 00:02:14.318 06:42:21 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:14.318 06:42:21 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:14.318 06:42:21 -- spdk/autotest.sh@60 -- # echo 2876875 00:02:14.318 06:42:21 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:14.318 06:42:21 -- spdk/autotest.sh@62 -- # echo 2876876 00:02:14.318 06:42:21 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:14.318 06:42:21 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:14.319 06:42:21 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:14.319 06:42:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:14.319 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:02:14.319 06:42:21 -- spdk/autotest.sh@70 -- # create_test_list 00:02:14.319 06:42:21 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:14.319 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:02:14.319 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:14.319 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:14.319 06:42:21 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:14.319 06:42:21 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.319 06:42:21 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.319 06:42:21 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:14.319 06:42:21 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.319 06:42:21 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:14.319 06:42:21 -- common/autotest_common.sh@1440 
-- # uname 00:02:14.319 06:42:21 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:14.319 06:42:21 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:14.319 06:42:21 -- common/autotest_common.sh@1460 -- # uname 00:02:14.319 06:42:21 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:14.319 06:42:21 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:14.319 06:42:21 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:14.319 06:42:21 -- spdk/autotest.sh@83 -- # hash lcov 00:02:14.319 06:42:21 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:14.319 06:42:21 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:14.319 --rc lcov_branch_coverage=1 00:02:14.319 --rc lcov_function_coverage=1 00:02:14.319 --rc genhtml_branch_coverage=1 00:02:14.319 --rc genhtml_function_coverage=1 00:02:14.319 --rc genhtml_legend=1 00:02:14.319 --rc geninfo_all_blocks=1 00:02:14.319 ' 00:02:14.319 06:42:21 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:14.319 --rc lcov_branch_coverage=1 00:02:14.319 --rc lcov_function_coverage=1 00:02:14.319 --rc genhtml_branch_coverage=1 00:02:14.319 --rc genhtml_function_coverage=1 00:02:14.319 --rc genhtml_legend=1 00:02:14.319 --rc geninfo_all_blocks=1 00:02:14.319 ' 00:02:14.319 06:42:21 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:14.319 --rc lcov_branch_coverage=1 00:02:14.319 --rc lcov_function_coverage=1 00:02:14.319 --rc genhtml_branch_coverage=1 00:02:14.319 --rc genhtml_function_coverage=1 00:02:14.319 --rc genhtml_legend=1 00:02:14.319 --rc geninfo_all_blocks=1 00:02:14.319 --no-external' 00:02:14.319 06:42:21 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:14.319 --rc lcov_branch_coverage=1 00:02:14.319 --rc lcov_function_coverage=1 00:02:14.319 --rc genhtml_branch_coverage=1 00:02:14.319 --rc genhtml_function_coverage=1 00:02:14.319 --rc genhtml_legend=1 00:02:14.319 --rc geninfo_all_blocks=1 00:02:14.319 --no-external' 00:02:14.319 06:42:21 -- spdk/autotest.sh@94 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:14.319 lcov: LCOV version 1.14 00:02:14.319 06:42:21 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:29.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:29.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:29.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:29.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:29.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:29.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:44.068 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:44.068 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:44.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:44.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no 
functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:44.069 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:44.069 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:44.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:44.069 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:44.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no 
functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:44.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:44.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:44.658 06:42:51 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:44.658 06:42:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:44.658 06:42:51 -- common/autotest_common.sh@10 -- # set +x 00:02:44.658 06:42:51 -- spdk/autotest.sh@102 -- # rm -f 00:02:44.658 06:42:51 
-- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:46.035 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:46.035 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:46.035 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:46.035 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:46.035 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:46.035 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:46.035 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:46.035 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:46.035 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:46.035 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:46.035 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:46.035 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:46.035 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:46.035 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:46.035 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:46.035 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:46.035 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:46.035 06:42:53 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:46.035 06:42:53 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:46.035 06:42:53 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:46.035 06:42:53 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:46.036 06:42:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:46.036 06:42:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:46.036 06:42:53 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:46.036 06:42:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.036 
06:42:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:46.036 06:42:53 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:46.293 06:42:53 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:46.293 06:42:53 -- spdk/autotest.sh@121 -- # grep -v p 00:02:46.293 06:42:53 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:46.293 06:42:53 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:46.293 06:42:53 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:46.293 06:42:53 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:46.293 06:42:53 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:46.293 No valid GPT data, bailing 00:02:46.293 06:42:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:46.293 06:42:53 -- scripts/common.sh@393 -- # pt= 00:02:46.293 06:42:53 -- scripts/common.sh@394 -- # return 1 00:02:46.293 06:42:53 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:46.293 1+0 records in 00:02:46.293 1+0 records out 00:02:46.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00246814 s, 425 MB/s 00:02:46.293 06:42:53 -- spdk/autotest.sh@129 -- # sync 00:02:46.293 06:42:53 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:46.293 06:42:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:46.293 06:42:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:48.192 06:42:55 -- spdk/autotest.sh@135 -- # uname -s 00:02:48.192 06:42:55 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:48.192 06:42:55 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:48.192 06:42:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:48.192 06:42:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:48.192 06:42:55 -- 
common/autotest_common.sh@10 -- # set +x 00:02:48.192 ************************************ 00:02:48.192 START TEST setup.sh 00:02:48.192 ************************************ 00:02:48.192 06:42:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:48.192 * Looking for test storage... 00:02:48.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:48.192 06:42:55 -- setup/test-setup.sh@10 -- # uname -s 00:02:48.192 06:42:55 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:48.192 06:42:55 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:48.192 06:42:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:48.192 06:42:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:48.192 06:42:55 -- common/autotest_common.sh@10 -- # set +x 00:02:48.192 ************************************ 00:02:48.192 START TEST acl 00:02:48.192 ************************************ 00:02:48.192 06:42:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:48.450 * Looking for test storage... 
00:02:48.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:48.450 06:42:55 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:48.450 06:42:55 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:48.450 06:42:55 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:48.450 06:42:55 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:48.450 06:42:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:48.450 06:42:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:48.450 06:42:55 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:48.450 06:42:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.450 06:42:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:48.451 06:42:55 -- setup/acl.sh@12 -- # devs=() 00:02:48.451 06:42:55 -- setup/acl.sh@12 -- # declare -a devs 00:02:48.451 06:42:55 -- setup/acl.sh@13 -- # drivers=() 00:02:48.451 06:42:55 -- setup/acl.sh@13 -- # declare -A drivers 00:02:48.451 06:42:55 -- setup/acl.sh@51 -- # setup reset 00:02:48.451 06:42:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:48.451 06:42:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.822 06:42:56 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:49.822 06:42:56 -- setup/acl.sh@16 -- # local dev driver 00:02:49.822 06:42:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.822 06:42:56 -- setup/acl.sh@15 -- # setup output status 00:02:49.822 06:42:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.822 06:42:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:50.756 Hugepages 00:02:50.756 node hugesize free / total 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # continue 00:02:50.756 06:42:57 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 00:02:50.756 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- 
setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read 
-r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # continue 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:50.756 06:42:57 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:50.756 06:42:57 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:50.756 06:42:57 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:50.756 06:42:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.756 06:42:57 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:50.756 06:42:57 -- setup/acl.sh@54 -- # run_test denied denied 00:02:50.756 06:42:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:50.756 06:42:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:50.756 06:42:57 -- common/autotest_common.sh@10 -- # set +x 00:02:50.756 ************************************ 00:02:50.756 START TEST denied 00:02:50.756 
************************************ 00:02:50.756 06:42:57 -- common/autotest_common.sh@1104 -- # denied 00:02:50.756 06:42:57 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:50.756 06:42:57 -- setup/acl.sh@38 -- # setup output config 00:02:50.756 06:42:57 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:50.756 06:42:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.756 06:42:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:52.128 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:52.128 06:42:59 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:52.128 06:42:59 -- setup/acl.sh@28 -- # local dev driver 00:02:52.128 06:42:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:52.128 06:42:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:52.128 06:42:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:52.128 06:42:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:52.128 06:42:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:52.128 06:42:59 -- setup/acl.sh@41 -- # setup reset 00:02:52.128 06:42:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:52.128 06:42:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.656 00:02:54.656 real 0m3.633s 00:02:54.656 user 0m1.071s 00:02:54.656 sys 0m1.730s 00:02:54.656 06:43:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.656 06:43:01 -- common/autotest_common.sh@10 -- # set +x 00:02:54.656 ************************************ 00:02:54.656 END TEST denied 00:02:54.656 ************************************ 00:02:54.656 06:43:01 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:54.656 06:43:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:54.656 06:43:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:54.656 
06:43:01 -- common/autotest_common.sh@10 -- # set +x 00:02:54.656 ************************************ 00:02:54.656 START TEST allowed 00:02:54.656 ************************************ 00:02:54.656 06:43:01 -- common/autotest_common.sh@1104 -- # allowed 00:02:54.656 06:43:01 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:54.656 06:43:01 -- setup/acl.sh@45 -- # setup output config 00:02:54.656 06:43:01 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:54.656 06:43:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.656 06:43:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.190 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:57.190 06:43:03 -- setup/acl.sh@47 -- # verify 00:02:57.190 06:43:03 -- setup/acl.sh@28 -- # local dev driver 00:02:57.190 06:43:03 -- setup/acl.sh@48 -- # setup reset 00:02:57.190 06:43:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.190 06:43:03 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.567 00:02:58.567 real 0m3.795s 00:02:58.567 user 0m1.069s 00:02:58.567 sys 0m1.654s 00:02:58.567 06:43:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.567 06:43:05 -- common/autotest_common.sh@10 -- # set +x 00:02:58.567 ************************************ 00:02:58.567 END TEST allowed 00:02:58.567 ************************************ 00:02:58.567 00:02:58.567 real 0m9.979s 00:02:58.567 user 0m3.172s 00:02:58.567 sys 0m4.982s 00:02:58.567 06:43:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.567 06:43:05 -- common/autotest_common.sh@10 -- # set +x 00:02:58.567 ************************************ 00:02:58.567 END TEST acl 00:02:58.567 ************************************ 00:02:58.567 06:43:05 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:58.567 06:43:05 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:58.567 06:43:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:58.567 06:43:05 -- common/autotest_common.sh@10 -- # set +x 00:02:58.567 ************************************ 00:02:58.567 START TEST hugepages 00:02:58.567 ************************************ 00:02:58.567 06:43:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:58.567 * Looking for test storage... 00:02:58.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:58.567 06:43:05 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:58.567 06:43:05 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:58.567 06:43:05 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:58.567 06:43:05 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:58.567 06:43:05 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:58.567 06:43:05 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:58.567 06:43:05 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:58.567 06:43:05 -- setup/common.sh@18 -- # local node= 00:02:58.567 06:43:05 -- setup/common.sh@19 -- # local var val 00:02:58.567 06:43:05 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.567 06:43:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.567 06:43:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.567 06:43:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.567 06:43:05 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.567 06:43:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.567 06:43:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 39558440 kB' 'MemAvailable: 44206528 kB' 'Buffers: 2696 kB' 'Cached: 14341192 kB' 
'SwapCached: 0 kB' 'Active: 10367464 kB' 'Inactive: 4457556 kB' 'Active(anon): 9801632 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484352 kB' 'Mapped: 233040 kB' 'Shmem: 9320500 kB' 'KReclaimable: 236648 kB' 'Slab: 621408 kB' 'SReclaimable: 236648 kB' 'SUnreclaim: 384760 kB' 'KernelStack: 12960 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 10956836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.567 
06:43:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.567 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.567 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- 
setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- 
# IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 
-- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.568 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.568 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # continue 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.569 06:43:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.569 06:43:05 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.569 06:43:05 -- setup/common.sh@33 -- # echo 2048 00:02:58.569 06:43:05 -- setup/common.sh@33 -- # return 0 00:02:58.569 06:43:05 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:58.569 06:43:05 -- setup/hugepages.sh@17 -- # 
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:58.569 06:43:05 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:58.569 06:43:05 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:58.569 06:43:05 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:58.569 06:43:05 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:58.569 06:43:05 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:58.569 06:43:05 -- setup/hugepages.sh@207 -- # get_nodes 00:02:58.569 06:43:05 -- setup/hugepages.sh@27 -- # local node 00:02:58.569 06:43:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.569 06:43:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:58.569 06:43:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.569 06:43:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:58.569 06:43:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:58.569 06:43:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:58.569 06:43:05 -- setup/hugepages.sh@208 -- # clear_hp 00:02:58.569 06:43:05 -- setup/hugepages.sh@37 -- # local node hp 00:02:58.569 06:43:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:58.569 06:43:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.569 06:43:05 -- setup/hugepages.sh@41 -- # echo 0 00:02:58.569 06:43:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.569 06:43:05 -- setup/hugepages.sh@41 -- # echo 0 00:02:58.569 06:43:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:58.569 06:43:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.569 06:43:05 -- setup/hugepages.sh@41 -- # echo 0 00:02:58.569 06:43:05 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.569 06:43:05 -- setup/hugepages.sh@41 -- # echo 0 00:02:58.569 06:43:05 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:58.569 06:43:05 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:58.569 06:43:05 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:58.569 06:43:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:58.569 06:43:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:58.569 06:43:05 -- common/autotest_common.sh@10 -- # set +x 00:02:58.569 ************************************ 00:02:58.569 START TEST default_setup 00:02:58.569 ************************************ 00:02:58.569 06:43:05 -- common/autotest_common.sh@1104 -- # default_setup 00:02:58.569 06:43:05 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:58.569 06:43:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:58.569 06:43:05 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:58.569 06:43:05 -- setup/hugepages.sh@51 -- # shift 00:02:58.569 06:43:05 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:58.569 06:43:05 -- setup/hugepages.sh@52 -- # local node_ids 00:02:58.569 06:43:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:58.569 06:43:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:58.569 06:43:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:58.569 06:43:05 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:58.569 06:43:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:58.569 06:43:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:58.569 06:43:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:58.569 06:43:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:58.569 06:43:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:58.569 06:43:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:58.569 06:43:05 -- setup/hugepages.sh@70 -- # for _no_nodes in 
"${user_nodes[@]}" 00:02:58.569 06:43:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:58.569 06:43:05 -- setup/hugepages.sh@73 -- # return 0 00:02:58.569 06:43:05 -- setup/hugepages.sh@137 -- # setup output 00:02:58.569 06:43:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.569 06:43:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.507 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:59.507 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:59.507 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:59.507 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:59.507 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:59.507 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:59.507 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:59.507 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:59.507 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:59.507 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:59.507 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:59.507 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:59.507 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:59.507 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:59.507 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:59.507 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:00.884 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:00.884 06:43:07 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:00.884 06:43:07 -- setup/hugepages.sh@89 -- # local node 00:03:00.884 06:43:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.884 06:43:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.884 06:43:07 -- setup/hugepages.sh@92 -- # local surp 00:03:00.884 06:43:07 -- setup/hugepages.sh@93 -- # local resv 00:03:00.884 06:43:07 -- setup/hugepages.sh@94 -- # local anon 00:03:00.884 06:43:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.884 06:43:07 -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.884 06:43:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:00.884 06:43:07 -- setup/common.sh@18 -- # local node= 00:03:00.884 06:43:07 -- setup/common.sh@19 -- # local var val 00:03:00.884 06:43:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.884 06:43:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.884 06:43:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.884 06:43:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.884 06:43:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.884 06:43:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.884 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.884 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.884 06:43:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41659268 kB' 'MemAvailable: 46307340 kB' 'Buffers: 2696 kB' 'Cached: 14341284 kB' 'SwapCached: 0 kB' 'Active: 10386384 kB' 'Inactive: 4457556 kB' 'Active(anon): 9820552 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503256 kB' 'Mapped: 233044 kB' 'Shmem: 9320592 kB' 'KReclaimable: 236616 kB' 'Slab: 621068 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384452 kB' 'KernelStack: 13024 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197276 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:00.884 06:43:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.884 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.884 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.884 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.884 06:43:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.884 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.884 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.884 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.884 06:43:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.884 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.884 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.884 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 
00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # 
[[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.885 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.885 06:43:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.886 06:43:07 -- setup/common.sh@33 -- # echo 0 00:03:00.886 06:43:07 -- setup/common.sh@33 -- # return 0 00:03:00.886 06:43:07 -- setup/hugepages.sh@97 -- # anon=0 00:03:00.886 06:43:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:00.886 06:43:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.886 06:43:07 -- setup/common.sh@18 -- # local node= 00:03:00.886 06:43:07 -- setup/common.sh@19 -- # local var val 00:03:00.886 06:43:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.886 06:43:07 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:00.886 06:43:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.886 06:43:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.886 06:43:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.886 06:43:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41660444 kB' 'MemAvailable: 46308516 kB' 'Buffers: 2696 kB' 'Cached: 14341284 kB' 'SwapCached: 0 kB' 'Active: 10386584 kB' 'Inactive: 4457556 kB' 'Active(anon): 9820752 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503528 kB' 'Mapped: 233176 kB' 'Shmem: 9320592 kB' 'KReclaimable: 236616 kB' 'Slab: 621316 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384700 kB' 'KernelStack: 13008 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 
00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 
00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.886 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.886 06:43:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.887 06:43:07 -- setup/common.sh@33 -- # echo 0 00:03:00.887 06:43:07 -- setup/common.sh@33 -- # return 0 00:03:00.887 06:43:07 -- setup/hugepages.sh@99 -- # surp=0 00:03:00.887 06:43:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:00.887 06:43:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:00.887 06:43:07 -- setup/common.sh@18 -- # local node= 00:03:00.887 06:43:07 -- setup/common.sh@19 -- # local var val 00:03:00.887 06:43:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.887 06:43:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.887 06:43:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.887 06:43:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.887 06:43:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.887 06:43:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41660916 kB' 'MemAvailable: 46308988 kB' 'Buffers: 2696 kB' 'Cached: 14341292 kB' 'SwapCached: 0 kB' 'Active: 10385780 kB' 'Inactive: 4457556 kB' 'Active(anon): 9819948 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502684 kB' 'Mapped: 233148 kB' 'Shmem: 9320600 kB' 'KReclaimable: 236616 kB' 'Slab: 621316 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384700 kB' 'KernelStack: 
12976 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 
00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.887 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.887 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 
00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.888 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.888 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.889 06:43:07 -- setup/common.sh@33 -- # echo 0 00:03:00.889 06:43:07 -- setup/common.sh@33 -- # return 0 00:03:00.889 06:43:07 -- setup/hugepages.sh@100 -- # resv=0 00:03:00.889 06:43:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:00.889 nr_hugepages=1024 00:03:00.889 06:43:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:00.889 resv_hugepages=0 00:03:00.889 06:43:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:00.889 surplus_hugepages=0 00:03:00.889 06:43:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:00.889 anon_hugepages=0 00:03:00.889 06:43:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:00.889 06:43:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:00.889 06:43:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:00.889 06:43:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:00.889 06:43:07 -- setup/common.sh@18 -- # local node= 00:03:00.889 06:43:07 -- setup/common.sh@19 -- # local var val 00:03:00.889 06:43:07 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:00.889 06:43:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.889 06:43:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.889 06:43:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.889 06:43:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.889 06:43:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41661504 kB' 'MemAvailable: 46309576 kB' 'Buffers: 2696 kB' 'Cached: 14341312 kB' 'SwapCached: 0 kB' 'Active: 10385968 kB' 'Inactive: 4457556 kB' 'Active(anon): 9820136 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502768 kB' 'Mapped: 233072 kB' 'Shmem: 9320620 kB' 'KReclaimable: 236616 kB' 'Slab: 621292 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384676 kB' 'KernelStack: 12960 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 
00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.889 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.889 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.890 06:43:07 -- setup/common.sh@32 -- # continue 00:03:00.890 06:43:07 -- setup/common.sh@31 -- # IFS=': ' 
00:03:00.890 06:43:07 -- setup/common.sh@31 -- # read -r var val _
00:03:00.890 06:43:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:00.890 06:43:07 -- setup/common.sh@32 -- # continue
[identical read/compare/continue iterations for KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted condensed]
00:03:00.890 06:43:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:00.890 06:43:07 -- setup/common.sh@33 -- # echo 1024
00:03:00.890 06:43:07 -- setup/common.sh@33 -- # return 0
00:03:00.890 06:43:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:00.890 06:43:07 -- setup/hugepages.sh@112 -- # get_nodes
00:03:00.890 06:43:07 -- setup/hugepages.sh@27 -- # local node
00:03:00.890 06:43:07 -- setup/hugepages.sh@29 -- # for node in
/sys/devices/system/node/node+([0-9])
00:03:00.890 06:43:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:00.890 06:43:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:00.891 06:43:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:00.891 06:43:07 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:00.891 06:43:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:00.891 06:43:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:00.891 06:43:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:00.891 06:43:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:00.891 06:43:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:00.891 06:43:07 -- setup/common.sh@18 -- # local node=0
00:03:00.891 06:43:07 -- setup/common.sh@19 -- # local var val
00:03:00.891 06:43:07 -- setup/common.sh@20 -- # local mem_f mem
00:03:00.891 06:43:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.891 06:43:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:00.891 06:43:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:00.891 06:43:07 -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.891 06:43:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.891 06:43:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20490028 kB' 'MemUsed: 12339856 kB' 'SwapCached: 0 kB' 'Active: 5913452 kB' 'Inactive: 3352072 kB' 'Active(anon): 5760904 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3352072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9127908 kB' 'Mapped: 101804 kB' 'AnonPages: 140788 kB' 'Shmem: 5623288 kB' 'KernelStack: 6040 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119252 kB' 'Slab: 335616 kB' 'SReclaimable: 119252 kB' 'SUnreclaim: 216364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[identical read/compare/continue iterations over the node0 keys preceding HugePages_Surp condensed]
00:03:00.892 06:43:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.892 06:43:07 -- setup/common.sh@33 -- # echo 0
00:03:00.892 06:43:07 -- setup/common.sh@33 -- # return 0
00:03:00.892 06:43:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:00.892 06:43:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:00.892 06:43:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:00.892 06:43:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:00.892 06:43:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:00.892 node0=1024 expecting 1024
00:03:00.892 06:43:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:00.892
00:03:00.892 real 0m2.501s
00:03:00.892 user 0m0.643s
00:03:00.892 sys 0m0.798s
00:03:00.892 06:43:07 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:00.892 06:43:07 -- common/autotest_common.sh@10 -- # set +x
00:03:00.892 ************************************
00:03:00.892 END TEST default_setup
00:03:00.892 ************************************
00:03:00.892 06:43:07 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:00.892 06:43:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:00.892 06:43:07 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:00.892 06:43:07 -- common/autotest_common.sh@10 -- # set +x
00:03:00.892 ************************************
00:03:00.892
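The field-by-field scans filling this trace are setup/common.sh's get_meminfo helper at work: it reads /proc/meminfo (or a per-node meminfo under sysfs), splits each "Key: value" line on `IFS=': '`, and echoes the value once the requested key matches. A minimal sketch of the same pattern, assuming a Linux /proc layout; the function body here is an illustrative reconstruction, not the SPDK script itself:

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Sketch of the get_meminfo pattern visible in the trace above
# (hypothetical reconstruction, not SPDK's setup/common.sh).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _ line mem
    # Per-node stats live under sysfs; their lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # _ swallows the "kB" unit
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo MemTotal
```

The `[[ $var == "$get" ]] || continue` comparison against every key before the match is exactly what produces the long runs of `continue` lines in the log.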
START TEST per_node_1G_alloc
00:03:00.892 ************************************
00:03:00.892 06:43:07 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:03:00.892 06:43:07 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:00.892 06:43:07 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:00.892 06:43:07 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:00.892 06:43:07 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:00.892 06:43:07 -- setup/hugepages.sh@51 -- # shift
00:03:00.892 06:43:07 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:00.892 06:43:07 -- setup/hugepages.sh@52 -- # local node_ids
00:03:00.892 06:43:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:00.892 06:43:07 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:00.892 06:43:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:00.892 06:43:07 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:00.892 06:43:07 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:00.892 06:43:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:00.892 06:43:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:00.892 06:43:07 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:00.892 06:43:07 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:00.892 06:43:07 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:00.892 06:43:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:00.892 06:43:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:00.892 06:43:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:00.892 06:43:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:00.892 06:43:07 -- setup/hugepages.sh@73 -- # return 0
00:03:00.892 06:43:07 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:00.892 06:43:07 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:00.892 06:43:07 -- setup/hugepages.sh@146 -- # setup output
00:03:00.892 06:43:07 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:00.892 06:43:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:02.302 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:02.302 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:02.302 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:02.302 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:02.302 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:02.302 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:02.303 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:02.303 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:02.303 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:02.303 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:02.303 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:02.303 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:02.303 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:02.303 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:02.303 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:02.303 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:02.303 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:02.303 06:43:09 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:02.303 06:43:09 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:02.303 06:43:09 -- setup/hugepages.sh@89 -- # local node
00:03:02.303 06:43:09 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:02.303 06:43:09 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:02.303 06:43:09 -- setup/hugepages.sh@92 -- # local surp
00:03:02.303 06:43:09 -- setup/hugepages.sh@93 -- # local resv
00:03:02.303 06:43:09 -- setup/hugepages.sh@94 -- # local anon
00:03:02.303 06:43:09 -- setup/hugepages.sh@96 -- # [[ always
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:02.303 06:43:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:02.303 06:43:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:02.303 06:43:09 -- setup/common.sh@18 -- # local node= 00:03:02.303 06:43:09 -- setup/common.sh@19 -- # local var val 00:03:02.303 06:43:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.303 06:43:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.303 06:43:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.303 06:43:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.303 06:43:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.303 06:43:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41652332 kB' 'MemAvailable: 46300404 kB' 'Buffers: 2696 kB' 'Cached: 14341368 kB' 'SwapCached: 0 kB' 'Active: 10390260 kB' 'Inactive: 4457556 kB' 'Active(anon): 9824428 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506904 kB' 'Mapped: 233532 kB' 'Shmem: 9320676 kB' 'KReclaimable: 236616 kB' 'Slab: 621140 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384524 kB' 'KernelStack: 13040 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10983708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197372 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- 
setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 
00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.303 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.303 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.303 06:43:09 -- setup/common.sh@31 -- # read -r var val _ [... identical match/continue trace repeats for each remaining /proc/meminfo key (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted); none matches AnonHugePages ...] 00:03:02.304 06:43:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.304 06:43:09 -- setup/common.sh@33 -- # echo 0 00:03:02.304 06:43:09 -- setup/common.sh@33 -- # return 0 00:03:02.304 06:43:09 -- setup/hugepages.sh@97 -- # anon=0 00:03:02.304 06:43:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:02.304 06:43:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.304 06:43:09 -- setup/common.sh@18 -- # local node= 00:03:02.304 06:43:09 -- setup/common.sh@19 -- # local var val 00:03:02.304 06:43:09 -- setup/common.sh@20 -- # local mem_f mem 
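The trace above is a `get_meminfo` lookup: the script reads `/proc/meminfo` line by line with `IFS=': '`, compares each key against the requested field, and echoes its value (defaulting to 0). A minimal standalone sketch of that loop, reconstructed from the traced commands (the simplified function below is an assumption, not SPDK's exact `setup/common.sh`):

```shell
#!/usr/bin/env bash
# Hedged sketch of the traced lookup loop: split each meminfo line on ': ',
# compare the key against the requested field, echo its value (0 if absent).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Matches the traced "[[ $var == $get ]] && echo/return" step.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < "${2:-/proc/meminfo}"
    echo 0
}

# Deterministic sample input instead of the live /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541732 kB' 'AnonHugePages: 0 kB' \
    'HugePages_Surp: 0' > "$sample"
get_meminfo AnonHugePages "$sample"   # prints 0
get_meminfo MemTotal "$sample"        # prints 60541732
rm -f "$sample"
```

The second positional argument (a file path) is added here only so the sketch can run against a fixed sample; the real script always reads the live meminfo file it selected.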
00:03:02.304 06:43:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.304 06:43:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.304 06:43:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.304 06:43:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.304 06:43:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.304 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.304 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.304 06:43:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41656272 kB' 'MemAvailable: 46304344 kB' 'Buffers: 2696 kB' 'Cached: 14341368 kB' 'SwapCached: 0 kB' 'Active: 10392600 kB' 'Inactive: 4457556 kB' 'Active(anon): 9826768 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509324 kB' 'Mapped: 233876 kB' 'Shmem: 9320676 kB' 'KReclaimable: 236616 kB' 'Slab: 621124 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384508 kB' 'KernelStack: 13024 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10985712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197328 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:02.304 06:43:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.304 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.304 
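Before the loop, the trace shows `mapfile -t mem` followed by `mem=("${mem[@]#Node +([0-9]) }")`. That expansion normalizes per-node meminfo files: lines in `/sys/devices/system/node/node<N>/meminfo` carry a `Node <N> ` prefix that `/proc/meminfo` lines lack, and the extglob pattern strips it so both sources parse identically. A standalone reconstruction of just that step (sample data is illustrative, not from this run's node files):

```shell
#!/usr/bin/env bash
# The "Node +([0-9]) " pattern needs extended globbing enabled.
shopt -s extglob

# Example lines as they appear in a per-node meminfo file:
mem=('Node 0 MemTotal: 60541732 kB' 'Node 0 MemFree: 41656272 kB')

# Strip the shortest leading "Node <digits> " prefix from every element,
# as traced at setup/common.sh@29.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
```

After the strip, the array holds plain `Key: value kB` lines, which is why the same `IFS=': ' read` loop can consume either file.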
06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.304 06:43:09 -- setup/common.sh@31 -- # read -r var val _ [... identical match/continue trace repeats for every /proc/meminfo key from MemFree through HugePages_Free; none matches HugePages_Surp ...] 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.305 06:43:09 -- setup/common.sh@33 -- # echo 0 00:03:02.305 06:43:09 -- setup/common.sh@33 -- # return 0 00:03:02.305 06:43:09 -- setup/hugepages.sh@99 -- # surp=0 00:03:02.305 06:43:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:02.305 06:43:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:02.305 06:43:09 -- setup/common.sh@18 -- # local node= 00:03:02.305 06:43:09 -- setup/common.sh@19 -- # local var val 00:03:02.305 06:43:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.305 06:43:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.305 06:43:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.305 06:43:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.305 06:43:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.305 06:43:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.305 06:43:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41656020 kB' 'MemAvailable: 46304092 kB' 'Buffers: 2696 kB' 'Cached: 14341380 kB' 'SwapCached: 0 kB' 'Active: 10386296 kB' 'Inactive: 4457556 kB' 'Active(anon): 9820464 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502948 kB' 'Mapped: 233424 kB' 'Shmem: 9320688 kB' 'KReclaimable: 236616 kB' 'Slab: 621140 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384524 kB' 'KernelStack: 
13040 kB' 'PageTables: 9332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197324 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.305 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.305 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.305 06:43:09 -- setup/common.sh@31 -- # read -r var val _ [... identical match/continue trace repeats for every /proc/meminfo key from SwapCached through CmaFree; none matches HugePages_Rsvd ...] 00:03:02.306 06:43:09 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.306 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.307 06:43:09 -- setup/common.sh@33 -- # echo 0 00:03:02.307 06:43:09 -- setup/common.sh@33 -- # return 0 00:03:02.307 06:43:09 -- setup/hugepages.sh@100 -- # resv=0 00:03:02.307 06:43:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:02.307 nr_hugepages=1024 00:03:02.307 06:43:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:02.307 resv_hugepages=0 00:03:02.307 06:43:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:02.307 surplus_hugepages=0 00:03:02.307 06:43:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:02.307 anon_hugepages=0 00:03:02.307 06:43:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.307 06:43:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:02.307 06:43:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:02.307 06:43:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:02.307 06:43:09 -- setup/common.sh@18 -- # local node= 00:03:02.307 06:43:09 -- setup/common.sh@19 -- # local var val 00:03:02.307 06:43:09 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:02.307 06:43:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.307 06:43:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.307 06:43:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.307 06:43:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.307 06:43:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41656020 kB' 'MemAvailable: 46304092 kB' 'Buffers: 2696 kB' 'Cached: 14341380 kB' 'SwapCached: 0 kB' 'Active: 10386608 kB' 'Inactive: 4457556 kB' 'Active(anon): 9820776 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503336 kB' 'Mapped: 233072 kB' 'Shmem: 9320688 kB' 'KReclaimable: 236616 kB' 'Slab: 621140 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384524 kB' 'KernelStack: 13072 kB' 'PageTables: 9440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197292 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 
00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.307 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.307 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.308 06:43:09 -- setup/common.sh@33 -- # echo 1024 00:03:02.308 06:43:09 -- setup/common.sh@33 -- # return 0 00:03:02.308 06:43:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.308 06:43:09 -- setup/hugepages.sh@112 -- # get_nodes 00:03:02.308 06:43:09 -- setup/hugepages.sh@27 -- # local node 00:03:02.308 06:43:09 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:02.308 06:43:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:02.308 06:43:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.308 06:43:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:02.308 06:43:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.308 06:43:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.308 06:43:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.308 06:43:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.308 06:43:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:02.308 06:43:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.308 06:43:09 -- setup/common.sh@18 -- # local node=0 00:03:02.308 06:43:09 -- setup/common.sh@19 -- # local var val 00:03:02.308 06:43:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.308 06:43:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.308 06:43:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:02.308 06:43:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:02.308 06:43:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.308 06:43:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21537616 kB' 'MemUsed: 11292268 kB' 'SwapCached: 0 kB' 'Active: 5913148 kB' 'Inactive: 3352072 kB' 'Active(anon): 5760600 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3352072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9127980 kB' 'Mapped: 101804 kB' 'AnonPages: 140392 kB' 'Shmem: 5623360 kB' 'KernelStack: 6024 kB' 'PageTables: 3864 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119252 kB' 'Slab: 335584 kB' 'SReclaimable: 119252 kB' 'SUnreclaim: 216332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.308 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.308 06:43:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.309 06:43:09 -- setup/common.sh@33 -- # echo 0 00:03:02.309 06:43:09 -- setup/common.sh@33 -- # return 0 00:03:02.309 06:43:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.309 06:43:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.309 06:43:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.309 06:43:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:02.309 06:43:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.309 06:43:09 -- setup/common.sh@18 -- # local node=1 00:03:02.309 06:43:09 -- setup/common.sh@19 -- # local var val 00:03:02.309 06:43:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.309 06:43:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.309 06:43:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:02.309 06:43:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:02.309 06:43:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.309 06:43:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.309 06:43:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711848 kB' 'MemFree: 20117408 kB' 'MemUsed: 7594440 kB' 'SwapCached: 0 kB' 
'Active: 4473204 kB' 'Inactive: 1105484 kB' 'Active(anon): 4059920 kB' 'Inactive(anon): 0 kB' 'Active(file): 413284 kB' 'Inactive(file): 1105484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5216132 kB' 'Mapped: 131268 kB' 'AnonPages: 362640 kB' 'Shmem: 3697364 kB' 'KernelStack: 7016 kB' 'PageTables: 5456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117364 kB' 'Slab: 285556 kB' 'SReclaimable: 117364 kB' 'SUnreclaim: 168192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:02.309 06:43:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.309 06:43:09 -- setup/common.sh@32 -- # continue
00:03:02.309 06:43:09 -- setup/common.sh@31 -- # IFS=': '
00:03:02.309 06:43:09 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue cycle repeated for every remaining meminfo field through HugePages_Free ...]
00:03:02.310 06:43:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.310 06:43:09 -- setup/common.sh@33 -- # echo 0
00:03:02.310 06:43:09 -- setup/common.sh@33 -- # return 0
00:03:02.310 06:43:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:02.310 06:43:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:02.310 06:43:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:02.310 06:43:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:02.310 06:43:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:02.310 node0=512 expecting 512
00:03:02.310 06:43:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:02.310 06:43:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:02.310 06:43:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:02.310 06:43:09 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:02.310 node1=512 expecting 512
00:03:02.310 06:43:09 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:02.310 
00:03:02.310 real	0m1.425s
00:03:02.310 user	0m0.612s
00:03:02.310 sys	0m0.776s
00:03:02.310 06:43:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:02.310 06:43:09 -- common/autotest_common.sh@10 -- # set +x
00:03:02.310 ************************************
00:03:02.310 END TEST per_node_1G_alloc
00:03:02.310 ************************************
00:03:02.310 06:43:09 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:02.310 06:43:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:02.310 06:43:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:02.310 06:43:09 -- common/autotest_common.sh@10 -- # set +x
00:03:02.310 ************************************
00:03:02.310 START TEST even_2G_alloc
00:03:02.310 ************************************
00:03:02.310 06:43:09 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:03:02.310 06:43:09 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:02.310 06:43:09 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:02.310 06:43:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:02.310 06:43:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:02.310 06:43:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:02.310 06:43:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:02.310 06:43:09 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:02.310 06:43:09 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:02.310 06:43:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:02.310 06:43:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:02.311 06:43:09 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:02.311 06:43:09 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:02.311 06:43:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:02.311 06:43:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:02.311 06:43:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:02.311 06:43:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:02.311 06:43:09 -- setup/hugepages.sh@83 -- # : 512
00:03:02.311 06:43:09 -- setup/hugepages.sh@84 -- # : 1
00:03:02.311 06:43:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:02.311 06:43:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:02.311 06:43:09 -- setup/hugepages.sh@83 -- # : 0
00:03:02.311 06:43:09 -- setup/hugepages.sh@84 -- # : 0
00:03:02.311 06:43:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:02.311 06:43:09 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:02.311 06:43:09 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:02.311 06:43:09 -- setup/hugepages.sh@153 -- # setup output
00:03:02.311 06:43:09 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:02.311 06:43:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:03.691 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:03.691 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:03.691 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:03.691 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:03.691 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:03.691 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:03.691 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:03.691 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:03.691 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:03.691 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:03.691 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:03.691 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:03.691 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:03.691 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:03.692 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:03.692 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:03.692 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:03.692 06:43:10 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:03.692 06:43:10 -- setup/hugepages.sh@89 -- # local node
00:03:03.692 06:43:10 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:03.692 06:43:10 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:03.692 06:43:10 -- setup/hugepages.sh@92 -- # local surp
00:03:03.692 06:43:10 -- setup/hugepages.sh@93 -- # local resv
00:03:03.692 06:43:10 -- setup/hugepages.sh@94 -- # local anon
00:03:03.692 06:43:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:03.692 06:43:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:03.692 06:43:10 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:03.692 06:43:10 -- setup/common.sh@18 -- # local node=
00:03:03.692 06:43:10 -- setup/common.sh@19 -- # local var val
00:03:03.692 06:43:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:03.692 06:43:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.692 06:43:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.692 06:43:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.692 06:43:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.692 06:43:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.692 06:43:10 -- setup/common.sh@31 -- # IFS=': '
00:03:03.692 06:43:10 -- setup/common.sh@31 -- # read -r var val _
00:03:03.692 06:43:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41651868 kB' 'MemAvailable: 46299940 kB' 'Buffers: 2696 kB' 'Cached: 14341456 kB' 'SwapCached: 0 kB' 'Active: 10385896 kB' 'Inactive: 4457556 kB' 'Active(anon): 9820064 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502456 kB' 'Mapped: 233116 kB' 'Shmem: 9320764 kB' 'KReclaimable: 236616 kB' 'Slab: 621140 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384524 kB' 'KernelStack: 13056 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197324 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB'
00:03:03.692 06:43:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.692 06:43:10 -- setup/common.sh@32 -- # continue
00:03:03.692 06:43:10 -- setup/common.sh@31 -- # IFS=': '
00:03:03.692 06:43:10 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue cycle repeated for every field through HardwareCorrupted ...]
00:03:03.693 06:43:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.693 06:43:10 -- setup/common.sh@33 -- # echo 0
00:03:03.693 06:43:10 -- setup/common.sh@33 -- # return 0
00:03:03.693 06:43:10 -- setup/hugepages.sh@97 -- # anon=0
00:03:03.693 06:43:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:03.693 06:43:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.693 06:43:10 -- setup/common.sh@18 -- # local node=
00:03:03.693 06:43:10 -- setup/common.sh@19 -- # local var val
00:03:03.693 06:43:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:03.693 06:43:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.693 06:43:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.693 06:43:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.693 06:43:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.693 06:43:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.693 06:43:10 -- setup/common.sh@31 -- # IFS=': '
00:03:03.693 06:43:10 -- setup/common.sh@31 -- # read -r var val _
00:03:03.693 06:43:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41651868 kB' 'MemAvailable: 46299940 kB' 'Buffers: 2696 kB' 'Cached: 14341456 kB' 'SwapCached: 0 kB' 'Active: 10386568 kB' 'Inactive: 4457556 kB' 'Active(anon): 9820736 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503196 kB' 'Mapped: 233192 kB' 'Shmem: 9320764 kB' 'KReclaimable: 236616 kB' 'Slab: 621152 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384536 kB' 'KernelStack: 13072 kB' 'PageTables: 9376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197292 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB'
00:03:03.693 06:43:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.693 06:43:10 -- setup/common.sh@32 -- # continue
00:03:03.693 06:43:10 -- setup/common.sh@31 -- # IFS=': '
00:03:03.693 06:43:10 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue cycle repeats for the remaining fields ...]
00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 
-- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 
00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.694 06:43:10 -- setup/common.sh@33 -- # echo 0 00:03:03.694 06:43:10 -- setup/common.sh@33 -- # return 0 00:03:03.694 06:43:10 -- setup/hugepages.sh@99 -- # surp=0 00:03:03.694 06:43:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.694 06:43:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.694 06:43:10 -- setup/common.sh@18 -- # local node= 00:03:03.694 06:43:10 -- setup/common.sh@19 -- # local var val 00:03:03.694 06:43:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.694 06:43:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.694 06:43:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.694 06:43:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.694 06:43:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.694 06:43:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41651868 kB' 'MemAvailable: 46299940 kB' 'Buffers: 2696 kB' 'Cached: 14341468 kB' 'SwapCached: 0 kB' 'Active: 10385976 kB' 'Inactive: 4457556 kB' 'Active(anon): 9820144 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502596 kB' 'Mapped: 233156 kB' 'Shmem: 9320776 kB' 'KReclaimable: 236616 kB' 'Slab: 621152 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384536 kB' 'KernelStack: 13040 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197276 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.694 06:43:10 -- 
setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.694 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.694 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 
00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 
06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 
-- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 
00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 
06:43:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.695 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.695 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.695 06:43:10 -- setup/common.sh@33 -- # echo 0 00:03:03.695 06:43:10 -- setup/common.sh@33 -- # return 0 00:03:03.695 06:43:10 -- setup/hugepages.sh@100 -- # resv=0 00:03:03.695 06:43:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:03.695 nr_hugepages=1024 00:03:03.696 06:43:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:03.696 resv_hugepages=0 00:03:03.696 06:43:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.696 surplus_hugepages=0 
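The trace above shows `setup/common.sh`'s `get_meminfo` helper scanning a /proc/meminfo snapshot line by line (`IFS=': '` plus `read -r var val _`), comparing each key against the requested one (`HugePages_Surp`, then `HugePages_Rsvd`), and echoing the matched value — here 0 for both, alongside `nr_hugepages=1024`. A minimal sketch of that loop, reading from stdin instead of the mapfile-captured `mem` array the real script uses (the simplification is an assumption; the key-matching logic follows the trace):

```shell
# Sketch of the get_meminfo loop seen in the trace: split each
# "Key: value [unit]" line on ':' / space, print the value for the
# requested key, and fall back to 0 when the key is absent.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "${val:-0}"
      return 0
    fi
  done
  echo 0
}

printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' \
  | get_meminfo HugePages_Total   # → 1024
```

A key absent from the input (e.g. `HugePages_Surp` on a kernel without surplus pages) yields 0, matching the `surp=0` / `resv=0` results logged above.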
00:03:03.696 06:43:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.696 anon_hugepages=0 00:03:03.696 06:43:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.696 06:43:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:03.696 06:43:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.696 06:43:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.696 06:43:10 -- setup/common.sh@18 -- # local node= 00:03:03.696 06:43:10 -- setup/common.sh@19 -- # local var val 00:03:03.696 06:43:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.696 06:43:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.696 06:43:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.696 06:43:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.696 06:43:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.696 06:43:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41651868 kB' 'MemAvailable: 46299940 kB' 'Buffers: 2696 kB' 'Cached: 14341480 kB' 'SwapCached: 0 kB' 'Active: 10385528 kB' 'Inactive: 4457556 kB' 'Active(anon): 9819696 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502068 kB' 'Mapped: 233080 kB' 'Shmem: 9320788 kB' 'KReclaimable: 236616 kB' 'Slab: 621160 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384544 kB' 'KernelStack: 13008 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10979836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
197292 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- 
setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- 
setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 
00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.696 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.696 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.697 06:43:10 -- setup/common.sh@33 -- # echo 1024 00:03:03.697 06:43:10 -- setup/common.sh@33 -- # return 0 00:03:03.697 06:43:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.697 06:43:10 -- setup/hugepages.sh@112 -- # get_nodes 00:03:03.697 06:43:10 -- setup/hugepages.sh@27 -- # local node 00:03:03.697 06:43:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.697 06:43:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:03.697 06:43:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.697 06:43:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:03.697 06:43:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.697 06:43:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.697 06:43:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.697 06:43:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.697 06:43:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:03.697 06:43:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.697 06:43:10 -- setup/common.sh@18 -- # local node=0 00:03:03.697 06:43:10 -- setup/common.sh@19 -- # local var val 00:03:03.697 06:43:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.697 06:43:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.697 06:43:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:03.697 06:43:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:03.697 06:43:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.697 06:43:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.697 06:43:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21523412 kB' 'MemUsed: 11306472 kB' 'SwapCached: 0 kB' 'Active: 5912516 kB' 'Inactive: 3352072 kB' 'Active(anon): 5759968 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3352072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128064 kB' 'Mapped: 101812 kB' 'AnonPages: 139720 kB' 'Shmem: 5623444 kB' 'KernelStack: 6056 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119252 kB' 'Slab: 335648 kB' 'SReclaimable: 119252 kB' 'SUnreclaim: 216396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.697 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.697 06:43:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 
-- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 
00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@33 -- # echo 0 00:03:03.698 06:43:10 -- setup/common.sh@33 -- # return 0 00:03:03.698 06:43:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.698 06:43:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.698 06:43:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.698 06:43:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:03.698 06:43:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.698 06:43:10 -- setup/common.sh@18 -- # local node=1 00:03:03.698 06:43:10 -- setup/common.sh@19 -- # local var val 00:03:03.698 06:43:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.698 06:43:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:03:03.698 06:43:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:03.698 06:43:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:03.698 06:43:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.698 06:43:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711848 kB' 'MemFree: 20129168 kB' 'MemUsed: 7582680 kB' 'SwapCached: 0 kB' 'Active: 4473436 kB' 'Inactive: 1105484 kB' 'Active(anon): 4060152 kB' 'Inactive(anon): 0 kB' 'Active(file): 413284 kB' 'Inactive(file): 1105484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5216136 kB' 'Mapped: 131268 kB' 'AnonPages: 362908 kB' 'Shmem: 3697368 kB' 'KernelStack: 7032 kB' 'PageTables: 5504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117364 kB' 'Slab: 285516 kB' 'SReclaimable: 117364 kB' 'SUnreclaim: 168152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 
06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.698 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.698 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- 
setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 
06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 
06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # continue 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.699 06:43:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.699 06:43:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.699 06:43:10 -- setup/common.sh@33 -- # echo 0 00:03:03.699 06:43:10 -- setup/common.sh@33 -- # return 0 00:03:03.699 06:43:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.699 06:43:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.699 06:43:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.699 06:43:10 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:03.699 06:43:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:03.699 node0=512 expecting 512 00:03:03.699 06:43:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.699 06:43:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.699 06:43:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.699 06:43:10 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:03.699 node1=512 expecting 512 00:03:03.699 06:43:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:03.699 00:03:03.699 real 0m1.426s 00:03:03.699 user 0m0.585s 00:03:03.699 sys 0m0.802s 00:03:03.699 06:43:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:03.699 06:43:10 -- common/autotest_common.sh@10 -- # set +x 00:03:03.699 ************************************ 00:03:03.699 END TEST even_2G_alloc 00:03:03.699 ************************************ 00:03:03.958 06:43:10 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:03.958 06:43:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:03.958 06:43:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:03.958 06:43:10 -- common/autotest_common.sh@10 -- # set +x 00:03:03.958 ************************************ 00:03:03.958 START TEST odd_alloc 00:03:03.958 ************************************ 00:03:03.958 06:43:10 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:03.958 06:43:10 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:03.958 06:43:10 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:03.958 06:43:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:03.958 06:43:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:03.958 06:43:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:03.958 06:43:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:03.958 06:43:10 -- setup/hugepages.sh@62 -- # user_nodes=() 
00:03:03.958 06:43:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.958 06:43:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:03.958 06:43:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.958 06:43:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.958 06:43:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.958 06:43:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:03.958 06:43:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:03.958 06:43:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:03.958 06:43:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:03.958 06:43:10 -- setup/hugepages.sh@83 -- # : 513 00:03:03.958 06:43:10 -- setup/hugepages.sh@84 -- # : 1 00:03:03.958 06:43:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:03.958 06:43:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:03.958 06:43:10 -- setup/hugepages.sh@83 -- # : 0 00:03:03.958 06:43:10 -- setup/hugepages.sh@84 -- # : 0 00:03:03.958 06:43:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:03.958 06:43:10 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:03.958 06:43:10 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:03.958 06:43:10 -- setup/hugepages.sh@160 -- # setup output 00:03:03.958 06:43:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.958 06:43:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:04.891 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:04.891 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:04.891 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:04.891 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:04.891 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:04.891 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:04.891 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:03:04.891 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:04.891 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:04.891 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:04.891 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:04.891 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:04.891 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:04.891 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:04.891 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:04.891 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:04.891 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:05.154 06:43:12 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:05.154 06:43:12 -- setup/hugepages.sh@89 -- # local node 00:03:05.154 06:43:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.154 06:43:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.154 06:43:12 -- setup/hugepages.sh@92 -- # local surp 00:03:05.154 06:43:12 -- setup/hugepages.sh@93 -- # local resv 00:03:05.154 06:43:12 -- setup/hugepages.sh@94 -- # local anon 00:03:05.154 06:43:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.154 06:43:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.154 06:43:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.154 06:43:12 -- setup/common.sh@18 -- # local node= 00:03:05.154 06:43:12 -- setup/common.sh@19 -- # local var val 00:03:05.154 06:43:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.154 06:43:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.154 06:43:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.154 06:43:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.154 06:43:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.154 06:43:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41689192 kB' 'MemAvailable: 46337264 kB' 'Buffers: 2696 kB' 'Cached: 14341556 kB' 'SwapCached: 0 kB' 'Active: 10378588 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812756 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494764 kB' 'Mapped: 232268 kB' 'Shmem: 9320864 kB' 'KReclaimable: 236616 kB' 'Slab: 621328 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384712 kB' 'KernelStack: 12928 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10951080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197260 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.154 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.154 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- 
setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 
00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.155 06:43:12 -- setup/common.sh@33 -- # echo 0 00:03:05.155 06:43:12 -- setup/common.sh@33 -- # return 0 00:03:05.155 06:43:12 -- setup/hugepages.sh@97 -- # anon=0 00:03:05.155 06:43:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.155 06:43:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.155 06:43:12 -- setup/common.sh@18 -- # local node= 00:03:05.155 06:43:12 -- setup/common.sh@19 -- # local var val 00:03:05.155 06:43:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.155 06:43:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.155 06:43:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.155 06:43:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.155 06:43:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.155 06:43:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41691996 kB' 'MemAvailable: 46340068 kB' 'Buffers: 2696 kB' 'Cached: 14341560 kB' 'SwapCached: 0 kB' 'Active: 10378484 kB' 
'Inactive: 4457556 kB' 'Active(anon): 9812652 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495080 kB' 'Mapped: 232268 kB' 'Shmem: 9320868 kB' 'KReclaimable: 236616 kB' 'Slab: 621320 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384704 kB' 'KernelStack: 12928 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10951092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.155 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.155 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # 
continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 
06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.156 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.156 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.157 06:43:12 -- setup/common.sh@33 -- # echo 0 00:03:05.157 06:43:12 -- setup/common.sh@33 -- # return 0 00:03:05.157 06:43:12 -- setup/hugepages.sh@99 -- # surp=0 00:03:05.157 06:43:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.157 06:43:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.157 06:43:12 -- setup/common.sh@18 -- # local 
node= 00:03:05.157 06:43:12 -- setup/common.sh@19 -- # local var val 00:03:05.157 06:43:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.157 06:43:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.157 06:43:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.157 06:43:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.157 06:43:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.157 06:43:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41691996 kB' 'MemAvailable: 46340068 kB' 'Buffers: 2696 kB' 'Cached: 14341572 kB' 'SwapCached: 0 kB' 'Active: 10377784 kB' 'Inactive: 4457556 kB' 'Active(anon): 9811952 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494288 kB' 'Mapped: 232152 kB' 'Shmem: 9320880 kB' 'KReclaimable: 236616 kB' 'Slab: 621288 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384672 kB' 'KernelStack: 12928 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10951108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:05.157 06:43:12 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 
06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.157 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.157 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # 
continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 
06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.158 06:43:12 -- setup/common.sh@33 -- # echo 0 00:03:05.158 06:43:12 -- setup/common.sh@33 -- # return 0 00:03:05.158 06:43:12 -- setup/hugepages.sh@100 -- # resv=0 00:03:05.158 06:43:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:05.158 nr_hugepages=1025 00:03:05.158 06:43:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:05.158 resv_hugepages=0 00:03:05.158 06:43:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:05.158 surplus_hugepages=0 00:03:05.158 06:43:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:05.158 anon_hugepages=0 00:03:05.158 06:43:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:05.158 06:43:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:05.158 06:43:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:05.158 06:43:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:05.158 06:43:12 -- setup/common.sh@18 -- # local node= 00:03:05.158 06:43:12 -- setup/common.sh@19 -- # local var val 00:03:05.158 06:43:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.158 06:43:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.158 06:43:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.158 06:43:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.158 06:43:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.158 06:43:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41692632 kB' 'MemAvailable: 46340704 kB' 'Buffers: 2696 kB' 'Cached: 14341572 kB' 'SwapCached: 0 kB' 
'Active: 10377448 kB' 'Inactive: 4457556 kB' 'Active(anon): 9811616 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493952 kB' 'Mapped: 232152 kB' 'Shmem: 9320880 kB' 'KReclaimable: 236616 kB' 'Slab: 621288 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384672 kB' 'KernelStack: 12912 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10951120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 
06:43:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.158 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.158 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.159 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.159 06:43:12 -- setup/common.sh@32 -- # continue
[xtrace condensed: each remaining /proc/meminfo key (SReclaimable through Unaccepted) is compared against HugePages_Total and skipped via continue]
00:03:05.160 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.160 06:43:12 -- setup/common.sh@33 -- # echo 1025 00:03:05.160 06:43:12 -- setup/common.sh@33 -- # return 0 00:03:05.160 06:43:12 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:05.160 06:43:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.160 06:43:12 -- setup/hugepages.sh@27 -- # local node 00:03:05.160 06:43:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.160 06:43:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:05.160 06:43:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.160 06:43:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:05.160 06:43:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.160 06:43:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.160 06:43:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.160 06:43:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.160 06:43:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.160 06:43:12 
-- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.160 06:43:12 -- setup/common.sh@18 -- # local node=0 00:03:05.160 06:43:12 -- setup/common.sh@19 -- # local var val 00:03:05.160 06:43:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.160 06:43:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.160 06:43:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.160 06:43:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.160 06:43:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.160 06:43:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.160 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.160 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.160 06:43:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21550468 kB' 'MemUsed: 11279416 kB' 'SwapCached: 0 kB' 'Active: 5908480 kB' 'Inactive: 3352072 kB' 'Active(anon): 5755932 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3352072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128120 kB' 'Mapped: 100884 kB' 'AnonPages: 135664 kB' 'Shmem: 5623500 kB' 'KernelStack: 6040 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119252 kB' 'Slab: 335792 kB' 'SReclaimable: 119252 kB' 'SUnreclaim: 216540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:05.160 06:43:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.160 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.160 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.160 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.160 06:43:12 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.160 06:43:12 -- setup/common.sh@32 -- # continue
[xtrace condensed: each remaining node0 meminfo key (MemUsed through HugePages_Free) is compared against HugePages_Surp and skipped via continue]
00:03:05.161 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.161 06:43:12 -- setup/common.sh@33 -- # echo 0 00:03:05.161 06:43:12 -- 
setup/common.sh@33 -- # return 0 00:03:05.161 06:43:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.161 06:43:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.161 06:43:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.161 06:43:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:05.161 06:43:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.161 06:43:12 -- setup/common.sh@18 -- # local node=1 00:03:05.161 06:43:12 -- setup/common.sh@19 -- # local var val 00:03:05.161 06:43:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.161 06:43:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.161 06:43:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:05.161 06:43:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:05.161 06:43:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.161 06:43:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.161 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.161 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.161 06:43:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711848 kB' 'MemFree: 20143824 kB' 'MemUsed: 7568024 kB' 'SwapCached: 0 kB' 'Active: 4469716 kB' 'Inactive: 1105484 kB' 'Active(anon): 4056432 kB' 'Inactive(anon): 0 kB' 'Active(file): 413284 kB' 'Inactive(file): 1105484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5216148 kB' 'Mapped: 131300 kB' 'AnonPages: 359116 kB' 'Shmem: 3697380 kB' 'KernelStack: 6920 kB' 'PageTables: 4920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117364 kB' 'Slab: 285488 kB' 'SReclaimable: 117364 kB' 'SUnreclaim: 168124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 
'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:05.161 06:43:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.161 06:43:12 -- setup/common.sh@32 -- # continue
[xtrace condensed: each remaining node1 meminfo key (MemFree through HugePages_Total) is compared against HugePages_Surp and skipped via continue]
00:03:05.162 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.162 06:43:12 -- setup/common.sh@32 -- # continue 00:03:05.162 06:43:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.162 06:43:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.162 06:43:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.162 06:43:12 -- setup/common.sh@33 -- # echo 0 00:03:05.162 06:43:12 -- setup/common.sh@33 -- # return 0 00:03:05.162 06:43:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.162 06:43:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.162 06:43:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.162 06:43:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.162 06:43:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:05.162 node0=512 expecting 513 00:03:05.162 06:43:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.162 06:43:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.162 06:43:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.162 06:43:12 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:05.162 node1=513 expecting 512 00:03:05.162 06:43:12 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:05.162 00:03:05.162 real 0m1.374s 00:03:05.162 user 0m0.552s 00:03:05.162 sys 0m0.784s 00:03:05.162 06:43:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.162 06:43:12 -- common/autotest_common.sh@10 -- # set +x 00:03:05.162 ************************************ 00:03:05.162 END TEST odd_alloc 00:03:05.162 ************************************ 00:03:05.162 06:43:12 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:05.162 06:43:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:05.162 06:43:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:05.162 06:43:12 -- common/autotest_common.sh@10 -- # set +x 00:03:05.162 
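The odd_alloc trace above spends most of its lines in setup/common.sh's field scan: it reads a meminfo-style file with `IFS=': '`, skipping every `var val` pair until the requested key matches, then echoes the value. A minimal standalone sketch of that lookup (the helper name `get_meminfo_field` and the `/tmp` demo file are hypothetical, not SPDK's actual setup/common.sh code):

```shell
#!/usr/bin/env bash
# Sketch of the key lookup pattern seen in the xtrace: split each line of a
# meminfo-style file on ': ' into key/value, skip non-matching keys with
# "continue", and print the value once the requested key is found.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # The long runs of "continue" in the log correspond to this skip.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo against a synthetic meminfo fragment (values invented for illustration):
printf '%s\n' 'MemTotal: 32829884 kB' 'HugePages_Total: 1025' \
    'HugePages_Surp: 0' > /tmp/meminfo.demo
get_meminfo_field HugePages_Total /tmp/meminfo.demo   # -> 1025
```

Per-node lookups work the same way, just with `/sys/devices/system/node/nodeN/meminfo` as the input file (after stripping the leading "Node N " prefix, as the trace's `mem=("${mem[@]#Node +([0-9]) }")` step does).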
************************************ 00:03:05.162 START TEST custom_alloc 00:03:05.162 ************************************ 00:03:05.162 06:43:12 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:05.162 06:43:12 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:05.162 06:43:12 -- setup/hugepages.sh@169 -- # local node 00:03:05.162 06:43:12 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:05.162 06:43:12 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:05.162 06:43:12 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:05.162 06:43:12 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:05.162 06:43:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:05.162 06:43:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:05.162 06:43:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.162 06:43:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:05.162 06:43:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:05.163 06:43:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:05.163 06:43:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.163 06:43:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:05.163 06:43:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.163 06:43:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.163 06:43:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.163 06:43:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:05.163 06:43:12 -- setup/hugepages.sh@83 -- # : 256 00:03:05.163 06:43:12 -- setup/hugepages.sh@84 -- # : 1 00:03:05.163 06:43:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:05.163 06:43:12 -- 
setup/hugepages.sh@83 -- # : 0 00:03:05.163 06:43:12 -- setup/hugepages.sh@84 -- # : 0 00:03:05.163 06:43:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:05.163 06:43:12 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:05.163 06:43:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:05.163 06:43:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:05.163 06:43:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:05.163 06:43:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:05.163 06:43:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.163 06:43:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:05.163 06:43:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.163 06:43:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.163 06:43:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.163 06:43:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:05.163 06:43:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:05.163 06:43:12 -- setup/hugepages.sh@78 -- # return 0 00:03:05.163 06:43:12 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:05.163 06:43:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:05.163 06:43:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:05.163 06:43:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:05.163 06:43:12 -- setup/hugepages.sh@182 
-- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:05.163 06:43:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:05.163 06:43:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:05.163 06:43:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.163 06:43:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:05.163 06:43:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.163 06:43:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.163 06:43:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.163 06:43:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:05.163 06:43:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:05.163 06:43:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:05.163 06:43:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:05.163 06:43:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:05.163 06:43:12 -- setup/hugepages.sh@78 -- # return 0 00:03:05.163 06:43:12 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:05.163 06:43:12 -- setup/hugepages.sh@187 -- # setup output 00:03:05.163 06:43:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.163 06:43:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:06.538 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:06.538 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:06.538 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:06.538 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:06.538 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:06.538 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:06.538 0000:00:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:03:06.538 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:06.538 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:06.538 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:06.538 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:06.538 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:06.538 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:06.538 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:06.538 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:06.538 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:06.538 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:06.538 06:43:13 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:06.538 06:43:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:06.538 06:43:13 -- setup/hugepages.sh@89 -- # local node 00:03:06.538 06:43:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:06.538 06:43:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:06.538 06:43:13 -- setup/hugepages.sh@92 -- # local surp 00:03:06.538 06:43:13 -- setup/hugepages.sh@93 -- # local resv 00:03:06.538 06:43:13 -- setup/hugepages.sh@94 -- # local anon 00:03:06.538 06:43:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:06.538 06:43:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:06.538 06:43:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:06.538 06:43:13 -- setup/common.sh@18 -- # local node= 00:03:06.538 06:43:13 -- setup/common.sh@19 -- # local var val 00:03:06.538 06:43:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.538 06:43:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.538 06:43:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.538 06:43:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.538 06:43:13 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:06.538 06:43:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.538 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.538 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 40632148 kB' 'MemAvailable: 45280220 kB' 'Buffers: 2696 kB' 'Cached: 14341648 kB' 'SwapCached: 0 kB' 'Active: 10378100 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812268 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494556 kB' 'Mapped: 232312 kB' 'Shmem: 9320956 kB' 'KReclaimable: 236616 kB' 'Slab: 621184 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384568 kB' 'KernelStack: 12896 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10951304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197276 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # 
continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.539 06:43:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:06.539 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.539 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.540 06:43:13 -- setup/common.sh@33 -- # echo 0 00:03:06.540 06:43:13 -- setup/common.sh@33 -- # return 0 00:03:06.540 06:43:13 -- setup/hugepages.sh@97 -- # anon=0 00:03:06.540 06:43:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:06.540 06:43:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.540 06:43:13 -- setup/common.sh@18 -- # local node= 00:03:06.540 06:43:13 -- setup/common.sh@19 -- # local var val 00:03:06.540 06:43:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.540 06:43:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.540 06:43:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.540 06:43:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.540 06:43:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.540 06:43:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 40632460 kB' 'MemAvailable: 45280532 kB' 
'Buffers: 2696 kB' 'Cached: 14341652 kB' 'SwapCached: 0 kB' 'Active: 10378672 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812840 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495152 kB' 'Mapped: 232312 kB' 'Shmem: 9320960 kB' 'KReclaimable: 236616 kB' 'Slab: 621184 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384568 kB' 'KernelStack: 12928 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10951316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:06.540 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.540 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 
-- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- 
setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.541 06:43:13 -- setup/common.sh@33 -- # echo 0 00:03:06.541 06:43:13 -- setup/common.sh@33 -- # return 0 00:03:06.541 06:43:13 -- setup/hugepages.sh@99 -- # surp=0 00:03:06.541 06:43:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:06.541 06:43:13 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:06.541 06:43:13 -- setup/common.sh@18 -- # local node= 00:03:06.541 06:43:13 -- setup/common.sh@19 -- # local var val 00:03:06.541 06:43:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.541 06:43:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.541 06:43:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.541 06:43:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.541 06:43:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.541 06:43:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 40632212 kB' 'MemAvailable: 45280284 kB' 'Buffers: 2696 kB' 'Cached: 14341664 kB' 'SwapCached: 0 kB' 'Active: 10377920 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812088 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494352 kB' 'Mapped: 232160 kB' 'Shmem: 9320972 kB' 'KReclaimable: 236616 kB' 'Slab: 621176 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384560 kB' 'KernelStack: 12928 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10951332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 
20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 
00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.541 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.541 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.541 06:43:13 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- 
setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- 
setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@32 -- 
# continue 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.542 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.542 06:43:13 -- setup/common.sh@33 -- # echo 0 00:03:06.542 06:43:13 -- setup/common.sh@33 -- # return 0 00:03:06.542 06:43:13 -- setup/hugepages.sh@100 -- # resv=0 00:03:06.542 06:43:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:06.542 nr_hugepages=1536 00:03:06.542 06:43:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:06.542 resv_hugepages=0 00:03:06.542 06:43:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:06.542 surplus_hugepages=0 00:03:06.542 06:43:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:06.542 anon_hugepages=0 00:03:06.542 06:43:13 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:06.542 06:43:13 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:06.542 06:43:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:06.542 06:43:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:06.542 06:43:13 -- setup/common.sh@18 -- # local node= 00:03:06.542 06:43:13 -- setup/common.sh@19 -- # local var val 00:03:06.542 06:43:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.542 06:43:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.542 06:43:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.542 06:43:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.542 06:43:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.542 06:43:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.542 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 40631960 kB' 'MemAvailable: 45280032 kB' 
'Buffers: 2696 kB' 'Cached: 14341676 kB' 'SwapCached: 0 kB' 'Active: 10377908 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812076 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494320 kB' 'Mapped: 232160 kB' 'Shmem: 9320984 kB' 'KReclaimable: 236616 kB' 'Slab: 621176 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384560 kB' 'KernelStack: 12912 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10951344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 
00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.543 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.543 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.544 06:43:13 -- setup/common.sh@33 -- # echo 1536 00:03:06.544 06:43:13 -- setup/common.sh@33 -- # return 0 00:03:06.544 06:43:13 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:06.544 06:43:13 -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.544 06:43:13 -- setup/hugepages.sh@27 -- # local node 00:03:06.544 06:43:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.544 06:43:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:06.544 06:43:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.544 06:43:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:06.544 06:43:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.544 06:43:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.544 06:43:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.544 06:43:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.544 06:43:13 -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:03:06.544 06:43:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.544 06:43:13 -- setup/common.sh@18 -- # local node=0 00:03:06.544 06:43:13 -- setup/common.sh@19 -- # local var val 00:03:06.544 06:43:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.544 06:43:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.544 06:43:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.544 06:43:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.544 06:43:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.544 06:43:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21535832 kB' 'MemUsed: 11294052 kB' 'SwapCached: 0 kB' 'Active: 5907880 kB' 'Inactive: 3352072 kB' 'Active(anon): 5755332 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3352072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128224 kB' 'Mapped: 100892 kB' 'AnonPages: 134932 kB' 'Shmem: 5623604 kB' 'KernelStack: 6008 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119252 kB' 'Slab: 335796 kB' 'SReclaimable: 119252 kB' 'SUnreclaim: 216544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 
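Here `get_meminfo` was called with a node argument (`HugePages_Surp 0`), so the trace shows `mem_f` being switched from `/proc/meminfo` to `/sys/devices/system/node/node0/meminfo`, the file slurped with `mapfile`, and the leading `Node <n> ` prefix stripped with the extglob expansion `${mem[@]#Node +([0-9]) }` before the same key scan runs. A hedged sketch of that per-node branch (function name is illustrative; the sysfs path is the standard kernel layout):

```shell
#!/usr/bin/env bash
shopt -s extglob  # required for the +([0-9]) pattern below

# Sketch: pick the per-node meminfo file when a node id is given, then
# drop the "Node <n> " prefix each of its lines carries, then scan keys.
node_meminfo() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
```

With no node argument the prefix strip is a no-op and the function falls back to the system-wide `/proc/meminfo`, which matches the node-less calls elsewhere in the log.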
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.544 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.544 06:43:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- 
setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- 
setup/common.sh@33 -- # echo 0 00:03:06.545 06:43:13 -- setup/common.sh@33 -- # return 0 00:03:06.545 06:43:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.545 06:43:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.545 06:43:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.545 06:43:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:06.545 06:43:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.545 06:43:13 -- setup/common.sh@18 -- # local node=1 00:03:06.545 06:43:13 -- setup/common.sh@19 -- # local var val 00:03:06.545 06:43:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.545 06:43:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.545 06:43:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:06.545 06:43:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:06.545 06:43:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.545 06:43:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711848 kB' 'MemFree: 19096128 kB' 'MemUsed: 8615720 kB' 'SwapCached: 0 kB' 'Active: 4469780 kB' 'Inactive: 1105484 kB' 'Active(anon): 4056496 kB' 'Inactive(anon): 0 kB' 'Active(file): 413284 kB' 'Inactive(file): 1105484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5216152 kB' 'Mapped: 131268 kB' 'AnonPages: 359136 kB' 'Shmem: 3697384 kB' 'KernelStack: 6920 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117364 kB' 'Slab: 285380 kB' 'SReclaimable: 117364 kB' 'SUnreclaim: 168016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 
-- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.545 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.545 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # continue 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.546 06:43:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.546 06:43:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.546 06:43:13 -- setup/common.sh@33 -- # echo 0 00:03:06.546 06:43:13 -- setup/common.sh@33 -- # return 0 00:03:06.546 06:43:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.546 06:43:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.546 06:43:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.546 06:43:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.546 06:43:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:06.546 node0=512 expecting 512 00:03:06.546 06:43:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.546 06:43:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.546 06:43:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.546 06:43:13 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:06.546 node1=1024 expecting 1024 00:03:06.546 06:43:13 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:06.546 00:03:06.546 real 0m1.389s 00:03:06.546 user 0m0.622s 00:03:06.546 sys 0m0.726s 00:03:06.546 06:43:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.546 06:43:13 -- common/autotest_common.sh@10 -- # set +x 00:03:06.546 ************************************ 00:03:06.546 END TEST custom_alloc 00:03:06.546 ************************************ 00:03:06.546 06:43:13 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:06.546 06:43:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.546 06:43:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.546 06:43:13 
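The `node0=512 expecting 512` / `node1=1024 expecting 1024` lines above are `hugepages.sh` confirming the per-node split it computed, and the test passes via `[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]` (again, xtrace's rendering of a quoted literal comparison). A hedged sketch of that final join-and-compare step, using illustrative array values taken from this run rather than SPDK's exact code:

```shell
#!/usr/bin/env bash
# Sketch of the final check in the trace: join the per-node hugepage
# counts with a comma and compare the result against the expected split.
nodes_test=([0]=512 [1]=1024)   # values observed for node0/node1 in this run
expected="512,1024"

# Joining with IFS=, inside a subshell mirrors building the "512,1024"
# string the [[ ... == ... ]] test in the log compares against.
got=$(IFS=,; echo "${nodes_test[*]}")
[[ $got == "$expected" ]] && echo "hugepage split OK: $got"
```

The counts themselves ultimately come from `HugePages_Total` in each node's sysfs meminfo, as the preceding scans show.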
-- common/autotest_common.sh@10 -- # set +x 00:03:06.546 ************************************ 00:03:06.546 START TEST no_shrink_alloc 00:03:06.546 ************************************ 00:03:06.546 06:43:13 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:06.546 06:43:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:06.546 06:43:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:06.546 06:43:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:06.546 06:43:13 -- setup/hugepages.sh@51 -- # shift 00:03:06.546 06:43:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:06.546 06:43:13 -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.546 06:43:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.546 06:43:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:06.546 06:43:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:06.546 06:43:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:06.546 06:43:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.546 06:43:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.546 06:43:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.546 06:43:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.546 06:43:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.546 06:43:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:06.546 06:43:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.546 06:43:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:06.546 06:43:13 -- setup/hugepages.sh@73 -- # return 0 00:03:06.546 06:43:13 -- setup/hugepages.sh@198 -- # setup output 00:03:06.546 06:43:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.546 06:43:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.923 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:07.923 0000:88:00.0 (8086 0a54): Already 
using the vfio-pci driver 00:03:07.923 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:07.923 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:07.923 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:07.923 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:07.923 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:07.923 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:07.923 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:07.923 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:07.923 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:07.923 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:07.923 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:07.923 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:07.923 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:07.923 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:07.923 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:07.923 06:43:14 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:07.923 06:43:14 -- setup/hugepages.sh@89 -- # local node 00:03:07.923 06:43:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.923 06:43:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.923 06:43:14 -- setup/hugepages.sh@92 -- # local surp 00:03:07.923 06:43:14 -- setup/hugepages.sh@93 -- # local resv 00:03:07.923 06:43:14 -- setup/hugepages.sh@94 -- # local anon 00:03:07.923 06:43:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.923 06:43:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.923 06:43:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.923 06:43:14 -- setup/common.sh@18 -- # local node= 00:03:07.923 06:43:14 -- setup/common.sh@19 -- # local var val 00:03:07.923 06:43:14 -- setup/common.sh@20 
-- # local mem_f mem
00:03:07.923 06:43:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.923 06:43:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.923 06:43:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.923 06:43:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.923 06:43:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.923 06:43:14 -- setup/common.sh@31 -- # IFS=': '
00:03:07.923 06:43:14 -- setup/common.sh@31 -- # read -r var val _
00:03:07.924 06:43:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41664960 kB' 'MemAvailable: 46313032 kB' 'Buffers: 2696 kB' 'Cached: 14341740 kB' 'SwapCached: 0 kB' 'Active: 10383912 kB' 'Inactive: 4457556 kB' 'Active(anon): 9818080 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500312 kB' 'Mapped: 233112 kB' 'Shmem: 9321048 kB' 'KReclaimable: 236616 kB' 'Slab: 621100 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384484 kB' 'KernelStack: 12960 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10957648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197184 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB'
00:03:07.924 06:43:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:07.924 06:43:14 -- setup/common.sh@32 -- # continue
[... repetitive trace elided: setup/common.sh@31-32 repeats the IFS=': ' / read / compare / continue cycle for every remaining meminfo field from MemFree through HardwareCorrupted, none of which matches AnonHugePages ...]
00:03:07.925 06:43:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:07.925 06:43:14 -- setup/common.sh@33 -- # echo 0
00:03:07.925 06:43:14 -- setup/common.sh@33 -- # return 0
00:03:07.925 06:43:14 -- setup/hugepages.sh@97 -- # anon=0
00:03:07.925 06:43:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:07.925 06:43:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.925 06:43:14 -- setup/common.sh@18 -- # local node=
00:03:07.925 06:43:14 -- setup/common.sh@19 -- # local var val
00:03:07.925 06:43:14 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.925 06:43:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.925 06:43:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.925 06:43:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.925 06:43:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.925 06:43:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.925 06:43:14 -- setup/common.sh@31 -- # IFS=': '
00:03:07.925 06:43:14 -- setup/common.sh@31 -- # read -r var val _
00:03:07.925 06:43:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41665328 kB' 'MemAvailable: 46313400 kB' 'Buffers: 2696 kB' 'Cached: 14341744 kB' 'SwapCached: 0 kB' 'Active: 10378528 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812696 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495008 kB' 'Mapped: 232676 kB' 'Shmem: 9321052 kB' 'KReclaimable: 236616 kB' 'Slab: 621100 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384484 kB' 'KernelStack: 12960 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10951540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB'
00:03:07.925 06:43:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.925 06:43:14 -- setup/common.sh@32 -- # continue
[... repetitive trace elided: setup/common.sh@31-32 repeats the same compare/continue cycle for every meminfo field from MemFree through HugePages_Rsvd, none of which matches HugePages_Surp ...]
00:03:07.926 06:43:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.926 06:43:15 -- setup/common.sh@33 -- # echo 0
00:03:07.926 06:43:15 -- setup/common.sh@33 -- # return 0
00:03:07.926 06:43:15 -- setup/hugepages.sh@99 -- # surp=0
00:03:07.926 06:43:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:07.926 06:43:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:07.926 06:43:15 -- setup/common.sh@18 -- # local node=
00:03:07.926 06:43:15 -- setup/common.sh@19 -- # local var val
00:03:07.926 06:43:15 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.926 06:43:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.926 06:43:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.926 06:43:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.926 06:43:15 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.926 06:43:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.926 06:43:15 -- setup/common.sh@31 -- # IFS=': '
00:03:07.926 06:43:15 -- setup/common.sh@31 -- # read -r var val _
00:03:07.926 06:43:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41665712 kB' 'MemAvailable: 46313784 kB' 'Buffers: 2696 kB' 'Cached: 14341756 kB' 'SwapCached: 0 kB' 'Active: 10378124 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812292 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494520 kB' 'Mapped: 232168 kB' 'Shmem: 9321064 kB' 'KReclaimable: 236616 kB' 'Slab: 621072 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384456 kB' 'KernelStack: 12928 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10951556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB'
00:03:07.926 06:43:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:07.926 06:43:15 -- setup/common.sh@32 -- # continue
[... repetitive trace elided: setup/common.sh@31-32 repeats the same compare/continue cycle for every meminfo field from MemFree through CommitLimit, none of which matches HugePages_Rsvd; the scan continues below ...]
00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 --
setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- 
setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- 
# continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.927 06:43:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.927 06:43:15 -- setup/common.sh@33 -- # echo 0 00:03:07.927 06:43:15 -- setup/common.sh@33 -- # return 0 00:03:07.927 06:43:15 -- setup/hugepages.sh@100 -- # resv=0 00:03:07.927 06:43:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.927 nr_hugepages=1024 00:03:07.927 06:43:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.927 resv_hugepages=0 00:03:07.927 06:43:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.927 surplus_hugepages=0 00:03:07.927 06:43:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.927 anon_hugepages=0 00:03:07.927 06:43:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.927 06:43:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.927 06:43:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.927 06:43:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.927 06:43:15 -- setup/common.sh@18 -- # local node= 00:03:07.927 06:43:15 -- setup/common.sh@19 -- # local var val 00:03:07.927 06:43:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.927 06:43:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.927 06:43:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.927 06:43:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.927 06:43:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.927 06:43:15 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.927 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41666528 kB' 'MemAvailable: 46314600 kB' 'Buffers: 2696 kB' 'Cached: 14341772 kB' 'SwapCached: 0 kB' 'Active: 10378140 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812308 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494528 kB' 'Mapped: 232168 kB' 'Shmem: 9321080 kB' 'KReclaimable: 236616 kB' 'Slab: 621068 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 384452 kB' 'KernelStack: 12928 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10951572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 
06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.928 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.928 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 
00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.187 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.187 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 
06:43:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.188 06:43:15 -- setup/common.sh@33 -- # echo 1024 00:03:08.188 06:43:15 -- setup/common.sh@33 -- # return 0 00:03:08.188 06:43:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.188 06:43:15 -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.188 06:43:15 -- setup/hugepages.sh@27 -- # local node 00:03:08.188 06:43:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.188 06:43:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:08.188 06:43:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.188 06:43:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:08.188 06:43:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.188 06:43:15 
-- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.188 06:43:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.188 06:43:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.188 06:43:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.188 06:43:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.188 06:43:15 -- setup/common.sh@18 -- # local node=0 00:03:08.188 06:43:15 -- setup/common.sh@19 -- # local var val 00:03:08.188 06:43:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.188 06:43:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.188 06:43:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.188 06:43:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.188 06:43:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.188 06:43:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20485896 kB' 'MemUsed: 12343988 kB' 'SwapCached: 0 kB' 'Active: 5908424 kB' 'Inactive: 3352072 kB' 'Active(anon): 5755876 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3352072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128316 kB' 'Mapped: 100900 kB' 'AnonPages: 135408 kB' 'Shmem: 5623696 kB' 'KernelStack: 6008 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119252 kB' 'Slab: 335696 kB' 'SReclaimable: 119252 kB' 'SUnreclaim: 216444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.188 06:43:15 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.188 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.188 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- 
setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 
00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # continue 
00:03:08.189 06:43:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.189 06:43:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.189 06:43:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.189 06:43:15 -- setup/common.sh@33 -- # echo 0 00:03:08.189 06:43:15 -- setup/common.sh@33 -- # return 0 00:03:08.189 06:43:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.189 06:43:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.189 06:43:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.189 06:43:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.189 06:43:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:08.189 node0=1024 expecting 1024 00:03:08.189 06:43:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:08.189 06:43:15 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:08.189 06:43:15 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:08.189 06:43:15 -- setup/hugepages.sh@202 -- # setup output 00:03:08.189 06:43:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.189 06:43:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.123 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:09.123 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:09.123 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:09.123 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:09.123 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:09.123 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:09.386 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:09.386 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:09.386 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:09.386 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:09.386 
0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:09.386 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:09.386 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:09.386 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:09.386 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:09.386 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:09.386 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:09.386 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:09.386 06:43:16 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:09.386 06:43:16 -- setup/hugepages.sh@89 -- # local node 00:03:09.386 06:43:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.386 06:43:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.386 06:43:16 -- setup/hugepages.sh@92 -- # local surp 00:03:09.386 06:43:16 -- setup/hugepages.sh@93 -- # local resv 00:03:09.386 06:43:16 -- setup/hugepages.sh@94 -- # local anon 00:03:09.386 06:43:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.386 06:43:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.386 06:43:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.386 06:43:16 -- setup/common.sh@18 -- # local node= 00:03:09.386 06:43:16 -- setup/common.sh@19 -- # local var val 00:03:09.386 06:43:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.386 06:43:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.386 06:43:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.386 06:43:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.386 06:43:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.386 06:43:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- 
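The `INFO: Requested 512 hugepages but 1024 already allocated on node0` line above reflects a pre-allocation check: before asking the kernel for `NRHUGE` pages, the setup script compares the request against what a node already holds and skips reallocation when enough pages exist. A minimal sketch of that comparison, factored as a standalone function (the function name and message wording here are assumptions modeled on the log line, not the exact `setup.sh` internals):

```shell
#!/usr/bin/env bash
# check_hugepages REQUESTED ALLOCATED
# Prints the informational message and returns 0 when the node already has
# at least as many hugepages as requested; returns 1 when more are needed.
check_hugepages() {
    local requested=$1 allocated=$2
    if [ "$allocated" -ge "$requested" ]; then
        echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
        return 0
    fi
    return 1
}

# In a real run the allocated count would come from sysfs, e.g.:
#   cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
check_hugepages 512 1024
```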
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41666452 kB' 'MemAvailable: 46314524 kB' 'Buffers: 2696 kB' 'Cached: 14341824 kB' 'SwapCached: 0 kB' 'Active: 10379004 kB' 'Inactive: 4457556 kB' 'Active(anon): 9813172 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495400 kB' 'Mapped: 232348 kB' 'Shmem: 9321132 kB' 'KReclaimable: 236616 kB' 'Slab: 620512 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 383896 kB' 'KernelStack: 12912 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10951744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- 
setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 
06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 
00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.386 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.386 06:43:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.387 06:43:16 -- setup/common.sh@33 -- # echo 0 00:03:09.387 06:43:16 -- setup/common.sh@33 -- # return 0 00:03:09.387 06:43:16 -- setup/hugepages.sh@97 -- # anon=0 00:03:09.387 06:43:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.387 06:43:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.387 06:43:16 -- setup/common.sh@18 -- # local node= 00:03:09.387 06:43:16 -- setup/common.sh@19 -- # local var val 00:03:09.387 06:43:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.387 06:43:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.387 06:43:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.387 06:43:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.387 06:43:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.387 06:43:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41666580 kB' 'MemAvailable: 46314652 kB' 'Buffers: 2696 kB' 'Cached: 14341828 kB' 'SwapCached: 0 kB' 'Active: 10378612 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812780 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494592 kB' 'Mapped: 232348 kB' 'Shmem: 9321136 kB' 'KReclaimable: 236616 kB' 'Slab: 620512 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 383896 kB' 'KernelStack: 12864 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10951756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197116 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 
00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.387 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.387 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.388 06:43:16 -- setup/common.sh@33 -- # echo 0 00:03:09.388 06:43:16 -- setup/common.sh@33 -- # return 0 00:03:09.388 06:43:16 -- setup/hugepages.sh@99 -- # surp=0 00:03:09.388 06:43:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:09.388 06:43:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.388 06:43:16 -- setup/common.sh@18 -- # local node= 00:03:09.388 06:43:16 -- setup/common.sh@19 -- # local var val 00:03:09.388 06:43:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.388 06:43:16 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.388 06:43:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.388 06:43:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.388 06:43:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.388 06:43:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41666332 kB' 'MemAvailable: 46314404 kB' 'Buffers: 2696 kB' 'Cached: 14341840 kB' 'SwapCached: 0 kB' 'Active: 10377992 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812160 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494368 kB' 'Mapped: 232176 kB' 'Shmem: 9321148 kB' 'KReclaimable: 236616 kB' 'Slab: 620536 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 383920 kB' 'KernelStack: 12944 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10951772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197100 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.388 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.388 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 
06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 
00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.389 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.389 06:43:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.390 06:43:16 -- setup/common.sh@33 -- # echo 0 00:03:09.390 06:43:16 -- setup/common.sh@33 -- # return 0 00:03:09.390 06:43:16 -- setup/hugepages.sh@100 -- # resv=0 00:03:09.390 06:43:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:09.390 nr_hugepages=1024 00:03:09.390 06:43:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:09.390 resv_hugepages=0 00:03:09.390 06:43:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:09.390 surplus_hugepages=0 00:03:09.390 06:43:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:09.390 anon_hugepages=0 00:03:09.390 06:43:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.390 06:43:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:09.390 06:43:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:09.390 06:43:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.390 06:43:16 -- setup/common.sh@18 -- # local node= 00:03:09.390 06:43:16 -- setup/common.sh@19 -- # local var val 00:03:09.390 06:43:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.390 06:43:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.390 06:43:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.390 06:43:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.390 06:43:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.390 06:43:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541732 kB' 'MemFree: 41666584 kB' 'MemAvailable: 46314656 kB' 'Buffers: 2696 kB' 'Cached: 14341852 kB' 'SwapCached: 0 kB' 'Active: 10378228 kB' 'Inactive: 4457556 kB' 'Active(anon): 9812396 kB' 'Inactive(anon): 0 kB' 'Active(file): 565832 kB' 'Inactive(file): 4457556 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494596 kB' 'Mapped: 232176 kB' 'Shmem: 9321160 kB' 'KReclaimable: 236616 kB' 'Slab: 620536 kB' 'SReclaimable: 236616 kB' 'SUnreclaim: 383920 kB' 'KernelStack: 12960 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10951788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197100 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2836060 kB' 'DirectMap2M: 20152320 kB' 'DirectMap1G: 46137344 kB' 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 
06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.390 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.390 06:43:16 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 
-- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.391 06:43:16 -- setup/common.sh@33 -- # echo 1024 00:03:09.391 06:43:16 -- setup/common.sh@33 -- # return 0 00:03:09.391 06:43:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.391 06:43:16 -- setup/hugepages.sh@112 -- # get_nodes 00:03:09.391 06:43:16 -- setup/hugepages.sh@27 -- # local node 00:03:09.391 06:43:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.391 06:43:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:09.391 06:43:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.391 06:43:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:09.391 06:43:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.391 06:43:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.391 06:43:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.391 06:43:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.391 06:43:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:09.391 06:43:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.391 06:43:16 -- setup/common.sh@18 -- # local node=0 00:03:09.391 06:43:16 -- setup/common.sh@19 -- # local var 
val 00:03:09.391 06:43:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.391 06:43:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.391 06:43:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.391 06:43:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.391 06:43:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.391 06:43:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20478608 kB' 'MemUsed: 12351276 kB' 'SwapCached: 0 kB' 'Active: 5908952 kB' 'Inactive: 3352072 kB' 'Active(anon): 5756404 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3352072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9128400 kB' 'Mapped: 100908 kB' 'AnonPages: 135948 kB' 'Shmem: 5623780 kB' 'KernelStack: 6040 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119252 kB' 'Slab: 335392 kB' 'SReclaimable: 119252 kB' 'SUnreclaim: 216140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.391 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.391 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.391 06:43:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # continue 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.392 06:43:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.392 06:43:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.392 06:43:16 -- setup/common.sh@33 -- # echo 0 00:03:09.392 06:43:16 -- setup/common.sh@33 -- # return 0 00:03:09.392 06:43:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.392 06:43:16 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:03:09.392 06:43:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.651 06:43:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.651 06:43:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:09.651 node0=1024 expecting 1024 00:03:09.651 06:43:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:09.651 00:03:09.651 real 0m2.867s 00:03:09.651 user 0m1.162s 00:03:09.651 sys 0m1.629s 00:03:09.651 06:43:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.651 06:43:16 -- common/autotest_common.sh@10 -- # set +x 00:03:09.651 ************************************ 00:03:09.651 END TEST no_shrink_alloc 00:03:09.651 ************************************ 00:03:09.651 06:43:16 -- setup/hugepages.sh@217 -- # clear_hp 00:03:09.651 06:43:16 -- setup/hugepages.sh@37 -- # local node hp 00:03:09.651 06:43:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.651 06:43:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.651 06:43:16 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.651 06:43:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.651 06:43:16 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.651 06:43:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.651 06:43:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.651 06:43:16 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.651 06:43:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.651 06:43:16 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.651 06:43:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:09.651 06:43:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:09.651 00:03:09.651 real 0m11.238s 00:03:09.651 user 0m4.285s 00:03:09.651 sys 
0m5.692s 00:03:09.651 06:43:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.651 06:43:16 -- common/autotest_common.sh@10 -- # set +x 00:03:09.651 ************************************ 00:03:09.651 END TEST hugepages 00:03:09.651 ************************************ 00:03:09.651 06:43:16 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:09.651 06:43:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.651 06:43:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.651 06:43:16 -- common/autotest_common.sh@10 -- # set +x 00:03:09.651 ************************************ 00:03:09.651 START TEST driver 00:03:09.651 ************************************ 00:03:09.651 06:43:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:09.651 * Looking for test storage... 00:03:09.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.651 06:43:16 -- setup/driver.sh@68 -- # setup reset 00:03:09.651 06:43:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.651 06:43:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.181 06:43:18 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:12.181 06:43:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:12.181 06:43:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:12.181 06:43:18 -- common/autotest_common.sh@10 -- # set +x 00:03:12.181 ************************************ 00:03:12.181 START TEST guess_driver 00:03:12.181 ************************************ 00:03:12.181 06:43:18 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:12.181 06:43:18 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:12.181 06:43:18 -- setup/driver.sh@47 -- # local fail=0 00:03:12.181 06:43:18 -- 
setup/driver.sh@49 -- # pick_driver 00:03:12.181 06:43:18 -- setup/driver.sh@36 -- # vfio 00:03:12.181 06:43:18 -- setup/driver.sh@21 -- # local iommu_groups 00:03:12.181 06:43:18 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:12.181 06:43:18 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:12.181 06:43:18 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:12.181 06:43:18 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:12.181 06:43:18 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:12.181 06:43:18 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:12.181 06:43:18 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:12.181 06:43:18 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:12.181 06:43:18 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:12.181 06:43:18 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:12.181 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:12.181 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:12.181 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:12.181 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:12.181 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:12.181 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:12.181 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:12.181 06:43:18 -- setup/driver.sh@30 -- # return 0 00:03:12.181 06:43:18 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:12.181 06:43:18 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:12.181 06:43:18 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:12.181 06:43:18 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 
00:03:12.181 Looking for driver=vfio-pci 00:03:12.181 06:43:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.181 06:43:18 -- setup/driver.sh@45 -- # setup output config 00:03:12.181 06:43:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.181 06:43:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.116 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.116 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.116 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.116 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.116 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.116 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.116 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.116 06:43:20 
-- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.116 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.116 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.116 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.116 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.117 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.117 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.117 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.117 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.117 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.117 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.117 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.117 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.117 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.117 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.117 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.117 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.117 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.117 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.117 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.117 06:43:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.117 06:43:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.117 06:43:20 -- setup/driver.sh@57 -- # read -r _ _ _ 
_ marker setup_driver 00:03:14.069 06:43:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:14.069 06:43:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:14.069 06:43:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:14.328 06:43:21 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:14.328 06:43:21 -- setup/driver.sh@65 -- # setup reset 00:03:14.328 06:43:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.328 06:43:21 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.857 00:03:16.857 real 0m4.558s 00:03:16.857 user 0m0.993s 00:03:16.857 sys 0m1.712s 00:03:16.857 06:43:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.857 06:43:23 -- common/autotest_common.sh@10 -- # set +x 00:03:16.857 ************************************ 00:03:16.857 END TEST guess_driver 00:03:16.857 ************************************ 00:03:16.857 00:03:16.857 real 0m6.981s 00:03:16.857 user 0m1.549s 00:03:16.857 sys 0m2.715s 00:03:16.857 06:43:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.857 06:43:23 -- common/autotest_common.sh@10 -- # set +x 00:03:16.857 ************************************ 00:03:16.857 END TEST driver 00:03:16.857 ************************************ 00:03:16.857 06:43:23 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:16.857 06:43:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:16.857 06:43:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.857 06:43:23 -- common/autotest_common.sh@10 -- # set +x 00:03:16.857 ************************************ 00:03:16.857 START TEST devices 00:03:16.857 ************************************ 00:03:16.857 06:43:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:16.857 * Looking for test storage... 
00:03:16.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.857 06:43:23 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:16.857 06:43:23 -- setup/devices.sh@192 -- # setup reset 00:03:16.857 06:43:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.857 06:43:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.233 06:43:25 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:18.233 06:43:25 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:18.233 06:43:25 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:18.233 06:43:25 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:18.233 06:43:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:18.233 06:43:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:18.233 06:43:25 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:18.233 06:43:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.233 06:43:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:18.233 06:43:25 -- setup/devices.sh@196 -- # blocks=() 00:03:18.233 06:43:25 -- setup/devices.sh@196 -- # declare -a blocks 00:03:18.233 06:43:25 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:18.233 06:43:25 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:18.233 06:43:25 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:18.233 06:43:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:18.233 06:43:25 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:18.233 06:43:25 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:18.233 06:43:25 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:18.233 06:43:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:18.233 06:43:25 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:18.233 06:43:25 -- scripts/common.sh@380 
-- # local block=nvme0n1 pt 00:03:18.233 06:43:25 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:18.233 No valid GPT data, bailing 00:03:18.233 06:43:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:18.233 06:43:25 -- scripts/common.sh@393 -- # pt= 00:03:18.233 06:43:25 -- scripts/common.sh@394 -- # return 1 00:03:18.233 06:43:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:18.233 06:43:25 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:18.233 06:43:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:18.233 06:43:25 -- setup/common.sh@80 -- # echo 1000204886016 00:03:18.233 06:43:25 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:18.233 06:43:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:18.233 06:43:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:18.233 06:43:25 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:18.233 06:43:25 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:18.233 06:43:25 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:18.233 06:43:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:18.233 06:43:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:18.233 06:43:25 -- common/autotest_common.sh@10 -- # set +x 00:03:18.233 ************************************ 00:03:18.233 START TEST nvme_mount 00:03:18.233 ************************************ 00:03:18.233 06:43:25 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:18.233 06:43:25 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:18.233 06:43:25 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:18.233 06:43:25 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.233 06:43:25 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.233 06:43:25 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:18.233 06:43:25 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:18.233 06:43:25 -- setup/common.sh@40 -- # local part_no=1 00:03:18.233 06:43:25 -- setup/common.sh@41 -- # local size=1073741824 00:03:18.233 06:43:25 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:18.233 06:43:25 -- setup/common.sh@44 -- # parts=() 00:03:18.233 06:43:25 -- setup/common.sh@44 -- # local parts 00:03:18.233 06:43:25 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:18.233 06:43:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:18.233 06:43:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:18.233 06:43:25 -- setup/common.sh@46 -- # (( part++ )) 00:03:18.233 06:43:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:18.233 06:43:25 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:18.233 06:43:25 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:18.233 06:43:25 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:19.167 Creating new GPT entries in memory. 00:03:19.167 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:19.167 other utilities. 00:03:19.167 06:43:26 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:19.167 06:43:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:19.167 06:43:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:19.167 06:43:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:19.167 06:43:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:20.102 Creating new GPT entries in memory. 00:03:20.102 The operation has completed successfully. 
00:03:20.102 06:43:27 -- setup/common.sh@57 -- # (( part++ )) 00:03:20.102 06:43:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:20.102 06:43:27 -- setup/common.sh@62 -- # wait 2896540 00:03:20.102 06:43:27 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.102 06:43:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:20.102 06:43:27 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.102 06:43:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:20.102 06:43:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:20.102 06:43:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.102 06:43:27 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.102 06:43:27 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:20.102 06:43:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:20.102 06:43:27 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.102 06:43:27 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.102 06:43:27 -- setup/devices.sh@53 -- # local found=0 00:03:20.102 06:43:27 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:20.102 06:43:27 -- setup/devices.sh@56 -- # : 00:03:20.102 06:43:27 -- setup/devices.sh@59 -- # local pci status 00:03:20.102 06:43:27 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:20.102 06:43:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:20.102 06:43:27 -- setup/devices.sh@47 -- # setup output config 00:03:20.102 06:43:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.102 06:43:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:21.477 06:43:28 -- setup/devices.sh@63 -- # found=1 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.477 06:43:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:21.477 06:43:28 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:21.477 06:43:28 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.477 06:43:28 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:21.477 
06:43:28 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:21.477 06:43:28 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:21.477 06:43:28 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.477 06:43:28 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.477 06:43:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:21.477 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:21.477 06:43:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:21.477 06:43:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:21.735 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:21.735 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:21.735 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:21.735 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:21.735 06:43:28 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:21.735 06:43:28 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:21.735 06:43:28 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.735 06:43:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:21.735 06:43:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:21.993 06:43:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.993 06:43:28 -- setup/devices.sh@116 -- # verify 0000:88:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:21.993 06:43:28 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:21.993 06:43:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:21.993 06:43:28 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.994 06:43:28 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:21.994 06:43:28 -- setup/devices.sh@53 -- # local found=0 00:03:21.994 06:43:28 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:21.994 06:43:28 -- setup/devices.sh@56 -- # : 00:03:21.994 06:43:28 -- setup/devices.sh@59 -- # local pci status 00:03:21.994 06:43:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.994 06:43:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:21.994 06:43:28 -- setup/devices.sh@47 -- # setup output config 00:03:21.994 06:43:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.994 06:43:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:22.929 06:43:29 -- setup/devices.sh@63 -- # found=1 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.929 06:43:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.929 06:43:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.930 06:43:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:22.930 06:43:30 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:22.930 06:43:30 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.930 06:43:30 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:22.930 06:43:30 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:22.930 06:43:30 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.930 06:43:30 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:22.930 06:43:30 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:22.930 06:43:30 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:22.930 06:43:30 -- setup/devices.sh@50 -- # local mount_point= 00:03:22.930 06:43:30 -- setup/devices.sh@51 -- # local test_file= 00:03:22.930 06:43:30 -- setup/devices.sh@53 -- # local found=0 00:03:22.930 06:43:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:22.930 06:43:30 -- setup/devices.sh@59 -- # local pci status 00:03:22.930 06:43:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.930 06:43:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:22.930 06:43:30 -- setup/devices.sh@47 -- # setup 
output config 00:03:22.930 06:43:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.930 06:43:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:24.307 06:43:31 -- setup/devices.sh@63 -- # found=1 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.307 06:43:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:24.307 06:43:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:24.307 06:43:31 -- setup/devices.sh@68 -- # return 0 00:03:24.307 06:43:31 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:24.307 06:43:31 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.307 06:43:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:24.307 06:43:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:24.307 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:03:24.307 00:03:24.307 real 0m6.156s 00:03:24.307 user 0m1.424s 00:03:24.307 sys 0m2.335s 00:03:24.307 06:43:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.307 06:43:31 -- common/autotest_common.sh@10 -- # set +x 00:03:24.307 ************************************ 00:03:24.307 END TEST nvme_mount 00:03:24.307 ************************************ 00:03:24.307 06:43:31 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:24.307 06:43:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:24.307 06:43:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:24.307 06:43:31 -- common/autotest_common.sh@10 -- # set +x 00:03:24.307 ************************************ 00:03:24.307 START TEST dm_mount 00:03:24.307 ************************************ 00:03:24.307 06:43:31 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:24.307 06:43:31 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:24.307 06:43:31 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:24.307 06:43:31 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:24.307 06:43:31 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:24.307 06:43:31 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:24.307 06:43:31 -- setup/common.sh@40 -- # local part_no=2 00:03:24.307 06:43:31 -- setup/common.sh@41 -- # local size=1073741824 00:03:24.307 06:43:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:24.307 06:43:31 -- setup/common.sh@44 -- # parts=() 00:03:24.307 06:43:31 -- setup/common.sh@44 -- # local parts 00:03:24.307 06:43:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:24.307 06:43:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:24.307 06:43:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:24.307 06:43:31 -- setup/common.sh@46 -- # (( part++ )) 00:03:24.307 06:43:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:24.307 06:43:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:03:24.307 06:43:31 -- setup/common.sh@46 -- # (( part++ )) 00:03:24.307 06:43:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:24.307 06:43:31 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:24.307 06:43:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:24.307 06:43:31 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:25.242 Creating new GPT entries in memory. 00:03:25.242 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:25.242 other utilities. 00:03:25.242 06:43:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:25.242 06:43:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:25.242 06:43:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:25.242 06:43:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:25.242 06:43:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:26.615 Creating new GPT entries in memory. 00:03:26.615 The operation has completed successfully. 00:03:26.615 06:43:33 -- setup/common.sh@57 -- # (( part++ )) 00:03:26.615 06:43:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:26.615 06:43:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:26.615 06:43:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:26.615 06:43:33 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:27.549 The operation has completed successfully. 
00:03:27.549 06:43:34 -- setup/common.sh@57 -- # (( part++ )) 00:03:27.549 06:43:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:27.549 06:43:34 -- setup/common.sh@62 -- # wait 2898882 00:03:27.549 06:43:34 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:27.549 06:43:34 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.549 06:43:34 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:27.549 06:43:34 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:27.549 06:43:34 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:27.549 06:43:34 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:27.549 06:43:34 -- setup/devices.sh@161 -- # break 00:03:27.549 06:43:34 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:27.549 06:43:34 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:27.549 06:43:34 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:27.549 06:43:34 -- setup/devices.sh@166 -- # dm=dm-0 00:03:27.549 06:43:34 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:27.549 06:43:34 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:27.549 06:43:34 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.549 06:43:34 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:27.549 06:43:34 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.549 06:43:34 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:27.549 06:43:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:27.549 06:43:34 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.549 06:43:34 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:27.549 06:43:34 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:27.549 06:43:34 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:27.549 06:43:34 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.549 06:43:34 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:27.549 06:43:34 -- setup/devices.sh@53 -- # local found=0 00:03:27.549 06:43:34 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:27.549 06:43:34 -- setup/devices.sh@56 -- # : 00:03:27.549 06:43:34 -- setup/devices.sh@59 -- # local pci status 00:03:27.549 06:43:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.549 06:43:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:27.549 06:43:34 -- setup/devices.sh@47 -- # setup output config 00:03:27.549 06:43:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.549 06:43:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:28.531 06:43:35 -- setup/devices.sh@63 -- # found=1 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 
06:43:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.531 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.531 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.532 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.532 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.532 06:43:35 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:03:28.532 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.532 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.532 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.532 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.532 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.532 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.532 06:43:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.532 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.791 06:43:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:28.791 06:43:35 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:28.791 06:43:35 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:28.791 06:43:35 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:28.791 06:43:35 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:28.791 06:43:35 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:28.791 06:43:35 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:28.791 06:43:35 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:28.791 06:43:35 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:28.791 06:43:35 -- setup/devices.sh@50 -- # local mount_point= 00:03:28.791 06:43:35 -- setup/devices.sh@51 -- # local test_file= 00:03:28.791 06:43:35 -- setup/devices.sh@53 -- # local found=0 00:03:28.791 06:43:35 -- setup/devices.sh@55 -- # [[ -n '' ]] 
00:03:28.791 06:43:35 -- setup/devices.sh@59 -- # local pci status 00:03:28.791 06:43:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.791 06:43:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:28.791 06:43:35 -- setup/devices.sh@47 -- # setup output config 00:03:28.791 06:43:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.791 06:43:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:29.727 06:43:36 -- setup/devices.sh@63 -- # found=1 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.727 06:43:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.727 06:43:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.987 06:43:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:29.987 06:43:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:29.987 06:43:37 -- setup/devices.sh@68 -- # return 0 00:03:29.987 06:43:37 -- setup/devices.sh@187 -- # cleanup_dm 00:03:29.987 06:43:37 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:29.987 06:43:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:29.987 06:43:37 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:29.987 06:43:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.987 06:43:37 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:29.987 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:29.987 06:43:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:29.987 06:43:37 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:29.987 00:03:29.987 real 0m5.743s 00:03:29.987 user 0m1.010s 00:03:29.987 sys 0m1.633s 00:03:29.987 06:43:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.987 06:43:37 -- common/autotest_common.sh@10 -- # set +x 00:03:29.987 ************************************ 00:03:29.987 END TEST dm_mount 00:03:29.987 ************************************ 00:03:29.987 06:43:37 -- setup/devices.sh@1 -- # cleanup 00:03:29.987 06:43:37 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:29.987 06:43:37 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.987 06:43:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.987 06:43:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:29.987 06:43:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:29.987 06:43:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:30.246 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:30.246 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:30.246 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:30.246 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:30.246 06:43:37 -- setup/devices.sh@12 -- # cleanup_dm 00:03:30.246 
06:43:37 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:30.246 06:43:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:30.246 06:43:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:30.246 06:43:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:30.246 06:43:37 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:30.246 06:43:37 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:30.246 00:03:30.246 real 0m13.785s 00:03:30.246 user 0m3.091s 00:03:30.246 sys 0m4.971s 00:03:30.246 06:43:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.246 06:43:37 -- common/autotest_common.sh@10 -- # set +x 00:03:30.246 ************************************ 00:03:30.246 END TEST devices 00:03:30.246 ************************************ 00:03:30.504 00:03:30.504 real 0m42.136s 00:03:30.504 user 0m12.159s 00:03:30.504 sys 0m18.474s 00:03:30.504 06:43:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.504 06:43:37 -- common/autotest_common.sh@10 -- # set +x 00:03:30.504 ************************************ 00:03:30.504 END TEST setup.sh 00:03:30.504 ************************************ 00:03:30.504 06:43:37 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:31.442 Hugepages 00:03:31.442 node hugesize free / total 00:03:31.442 node0 1048576kB 0 / 0 00:03:31.442 node0 2048kB 2048 / 2048 00:03:31.442 node1 1048576kB 0 / 0 00:03:31.442 node1 2048kB 0 / 0 00:03:31.442 00:03:31.442 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.442 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:31.442 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:31.442 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:31.442 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:31.442 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:31.442 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:31.442 
I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:31.442 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:31.442 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:31.442 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:31.442 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:31.442 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:31.442 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:31.442 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:31.442 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:31.442 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:31.442 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:31.442 06:43:38 -- spdk/autotest.sh@141 -- # uname -s 00:03:31.442 06:43:38 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:31.442 06:43:38 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:31.442 06:43:38 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.822 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:32.822 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:32.822 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:32.822 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:32.822 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:32.822 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:32.822 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:32.822 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:32.822 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:32.822 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:32.822 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:32.822 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:32.822 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:32.822 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:32.822 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:32.822 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:33.762 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.762 06:43:40 -- common/autotest_common.sh@1517 
-- # sleep 1 00:03:34.700 06:43:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:34.700 06:43:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:34.700 06:43:41 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:34.700 06:43:41 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:34.700 06:43:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:34.700 06:43:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:34.700 06:43:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.700 06:43:41 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.700 06:43:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:34.700 06:43:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:34.700 06:43:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:34.700 06:43:41 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.083 Waiting for block devices as requested 00:03:36.083 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:36.083 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:36.083 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:36.083 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:36.343 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:36.343 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:36.343 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:36.343 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:36.343 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:36.603 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:36.603 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:36.603 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:36.862 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:36.862 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:36.862 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:36.862 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:37.122 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:37.122 06:43:44 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:37.122 06:43:44 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:37.122 06:43:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:37.122 06:43:44 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:37.122 06:43:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:37.122 06:43:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:37.122 06:43:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:37.122 06:43:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:37.122 06:43:44 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:37.122 06:43:44 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:37.122 06:43:44 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:37.122 06:43:44 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:37.122 06:43:44 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:37.122 06:43:44 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:03:37.122 06:43:44 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:37.122 06:43:44 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:37.122 06:43:44 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:37.122 06:43:44 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:37.122 06:43:44 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:37.122 06:43:44 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:37.122 06:43:44 -- common/autotest_common.sh@1540 -- # [[ 
0 -eq 0 ]] 00:03:37.122 06:43:44 -- common/autotest_common.sh@1542 -- # continue 00:03:37.122 06:43:44 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:37.122 06:43:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:37.122 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:03:37.122 06:43:44 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:37.122 06:43:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:37.122 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:03:37.122 06:43:44 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.499 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:38.499 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:38.499 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:38.499 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:38.499 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:38.499 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:38.499 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:38.499 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:38.499 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:38.499 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:38.499 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:38.499 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:38.499 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:38.499 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:38.499 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:38.499 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:39.437 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:39.437 06:43:46 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:39.437 06:43:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:39.437 06:43:46 -- common/autotest_common.sh@10 -- # set +x 00:03:39.437 06:43:46 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:39.437 06:43:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 
00:03:39.437 06:43:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:39.437 06:43:46 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:39.437 06:43:46 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:39.437 06:43:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:39.437 06:43:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:39.437 06:43:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:39.437 06:43:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:39.437 06:43:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:39.437 06:43:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:39.437 06:43:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:39.437 06:43:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:39.437 06:43:46 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:39.437 06:43:46 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:39.437 06:43:46 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:03:39.437 06:43:46 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:39.437 06:43:46 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:03:39.437 06:43:46 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:03:39.437 06:43:46 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:03:39.437 06:43:46 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2904300 00:03:39.437 06:43:46 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.437 06:43:46 -- common/autotest_common.sh@1583 -- # waitforlisten 2904300 00:03:39.437 06:43:46 -- common/autotest_common.sh@819 -- # '[' -z 2904300 ']' 00:03:39.437 06:43:46 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.437 06:43:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:39.437 06:43:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.437 06:43:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:39.437 06:43:46 -- common/autotest_common.sh@10 -- # set +x 00:03:39.697 [2024-05-12 06:43:46.583688] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:03:39.697 [2024-05-12 06:43:46.583792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904300 ] 00:03:39.697 EAL: No free 2048 kB hugepages reported on node 1 00:03:39.697 [2024-05-12 06:43:46.645094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.697 [2024-05-12 06:43:46.757880] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:39.697 [2024-05-12 06:43:46.758067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.635 06:43:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:40.635 06:43:47 -- common/autotest_common.sh@852 -- # return 0 00:03:40.635 06:43:47 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:40.635 06:43:47 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:40.635 06:43:47 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:43.931 nvme0n1 00:03:43.931 06:43:50 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_opal_revert -b nvme0 -p test 00:03:43.931 [2024-05-12 06:43:50.815049] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:43.931 [2024-05-12 06:43:50.815109] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:43.931 request: 00:03:43.931 { 00:03:43.932 "nvme_ctrlr_name": "nvme0", 00:03:43.932 "password": "test", 00:03:43.932 "method": "bdev_nvme_opal_revert", 00:03:43.932 "req_id": 1 00:03:43.932 } 00:03:43.932 Got JSON-RPC error response 00:03:43.932 response: 00:03:43.932 { 00:03:43.932 "code": -32603, 00:03:43.932 "message": "Internal error" 00:03:43.932 } 00:03:43.932 06:43:50 -- common/autotest_common.sh@1589 -- # true 00:03:43.932 06:43:50 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:43.932 06:43:50 -- common/autotest_common.sh@1593 -- # killprocess 2904300 00:03:43.932 06:43:50 -- common/autotest_common.sh@926 -- # '[' -z 2904300 ']' 00:03:43.932 06:43:50 -- common/autotest_common.sh@930 -- # kill -0 2904300 00:03:43.932 06:43:50 -- common/autotest_common.sh@931 -- # uname 00:03:43.932 06:43:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:03:43.932 06:43:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2904300 00:03:43.932 06:43:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:03:43.932 06:43:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:03:43.932 06:43:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2904300' 00:03:43.932 killing process with pid 2904300 00:03:43.932 06:43:50 -- common/autotest_common.sh@945 -- # kill 2904300 00:03:43.932 06:43:50 -- common/autotest_common.sh@950 -- # wait 2904300 00:03:45.857 06:43:52 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:03:45.857 06:43:52 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:45.857 06:43:52 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:45.857 06:43:52 -- spdk/autotest.sh@166 -- # 
[[ 0 -eq 1 ]] 00:03:45.857 06:43:52 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:45.857 06:43:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:45.857 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:03:45.857 06:43:52 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:45.857 06:43:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.857 06:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.857 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:03:45.857 ************************************ 00:03:45.857 START TEST env 00:03:45.857 ************************************ 00:03:45.857 06:43:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:45.857 * Looking for test storage... 00:03:45.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:45.857 06:43:52 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:45.857 06:43:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.857 06:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.857 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:03:45.857 ************************************ 00:03:45.857 START TEST env_memory 00:03:45.857 ************************************ 00:03:45.857 06:43:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:45.857 00:03:45.857 00:03:45.857 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.857 http://cunit.sourceforge.net/ 00:03:45.857 00:03:45.857 00:03:45.857 Suite: memory 00:03:45.857 Test: alloc and free memory map ...[2024-05-12 06:43:52.754479] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 
00:03:45.857 passed 00:03:45.857 Test: mem map translation ...[2024-05-12 06:43:52.776784] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:45.857 [2024-05-12 06:43:52.776809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:45.857 [2024-05-12 06:43:52.776856] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:45.857 [2024-05-12 06:43:52.776870] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:45.857 passed 00:03:45.857 Test: mem map registration ...[2024-05-12 06:43:52.820443] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:45.857 [2024-05-12 06:43:52.820464] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:45.857 passed 00:03:45.857 Test: mem map adjacent registrations ...passed 00:03:45.857 00:03:45.857 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.857 suites 1 1 n/a 0 0 00:03:45.857 tests 4 4 4 0 0 00:03:45.857 asserts 152 152 152 0 n/a 00:03:45.857 00:03:45.857 Elapsed time = 0.148 seconds 00:03:45.857 00:03:45.857 real 0m0.155s 00:03:45.857 user 0m0.148s 00:03:45.857 sys 0m0.006s 00:03:45.857 06:43:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.857 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:03:45.857 ************************************ 00:03:45.857 END TEST 
env_memory 00:03:45.857 ************************************ 00:03:45.857 06:43:52 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:45.857 06:43:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.857 06:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.857 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:03:45.857 ************************************ 00:03:45.857 START TEST env_vtophys 00:03:45.857 ************************************ 00:03:45.857 06:43:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:45.857 EAL: lib.eal log level changed from notice to debug 00:03:45.857 EAL: Detected lcore 0 as core 0 on socket 0 00:03:45.857 EAL: Detected lcore 1 as core 1 on socket 0 00:03:45.858 EAL: Detected lcore 2 as core 2 on socket 0 00:03:45.858 EAL: Detected lcore 3 as core 3 on socket 0 00:03:45.858 EAL: Detected lcore 4 as core 4 on socket 0 00:03:45.858 EAL: Detected lcore 5 as core 5 on socket 0 00:03:45.858 EAL: Detected lcore 6 as core 8 on socket 0 00:03:45.858 EAL: Detected lcore 7 as core 9 on socket 0 00:03:45.858 EAL: Detected lcore 8 as core 10 on socket 0 00:03:45.858 EAL: Detected lcore 9 as core 11 on socket 0 00:03:45.858 EAL: Detected lcore 10 as core 12 on socket 0 00:03:45.858 EAL: Detected lcore 11 as core 13 on socket 0 00:03:45.858 EAL: Detected lcore 12 as core 0 on socket 1 00:03:45.858 EAL: Detected lcore 13 as core 1 on socket 1 00:03:45.858 EAL: Detected lcore 14 as core 2 on socket 1 00:03:45.858 EAL: Detected lcore 15 as core 3 on socket 1 00:03:45.858 EAL: Detected lcore 16 as core 4 on socket 1 00:03:45.858 EAL: Detected lcore 17 as core 5 on socket 1 00:03:45.858 EAL: Detected lcore 18 as core 8 on socket 1 00:03:45.858 EAL: Detected lcore 19 as core 9 on socket 1 00:03:45.858 EAL: Detected lcore 20 as core 10 on socket 1 00:03:45.858 EAL: Detected 
lcore 21 as core 11 on socket 1 00:03:45.858 EAL: Detected lcore 22 as core 12 on socket 1 00:03:45.858 EAL: Detected lcore 23 as core 13 on socket 1 00:03:45.858 EAL: Detected lcore 24 as core 0 on socket 0 00:03:45.858 EAL: Detected lcore 25 as core 1 on socket 0 00:03:45.858 EAL: Detected lcore 26 as core 2 on socket 0 00:03:45.858 EAL: Detected lcore 27 as core 3 on socket 0 00:03:45.858 EAL: Detected lcore 28 as core 4 on socket 0 00:03:45.858 EAL: Detected lcore 29 as core 5 on socket 0 00:03:45.858 EAL: Detected lcore 30 as core 8 on socket 0 00:03:45.858 EAL: Detected lcore 31 as core 9 on socket 0 00:03:45.858 EAL: Detected lcore 32 as core 10 on socket 0 00:03:45.858 EAL: Detected lcore 33 as core 11 on socket 0 00:03:45.858 EAL: Detected lcore 34 as core 12 on socket 0 00:03:45.858 EAL: Detected lcore 35 as core 13 on socket 0 00:03:45.858 EAL: Detected lcore 36 as core 0 on socket 1 00:03:45.858 EAL: Detected lcore 37 as core 1 on socket 1 00:03:45.858 EAL: Detected lcore 38 as core 2 on socket 1 00:03:45.858 EAL: Detected lcore 39 as core 3 on socket 1 00:03:45.858 EAL: Detected lcore 40 as core 4 on socket 1 00:03:45.858 EAL: Detected lcore 41 as core 5 on socket 1 00:03:45.858 EAL: Detected lcore 42 as core 8 on socket 1 00:03:45.858 EAL: Detected lcore 43 as core 9 on socket 1 00:03:45.858 EAL: Detected lcore 44 as core 10 on socket 1 00:03:45.858 EAL: Detected lcore 45 as core 11 on socket 1 00:03:45.858 EAL: Detected lcore 46 as core 12 on socket 1 00:03:45.858 EAL: Detected lcore 47 as core 13 on socket 1 00:03:45.858 EAL: Maximum logical cores by configuration: 128 00:03:45.858 EAL: Detected CPU lcores: 48 00:03:45.858 EAL: Detected NUMA nodes: 2 00:03:45.858 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:45.858 EAL: Detected shared linkage of DPDK 00:03:45.858 EAL: No shared files mode enabled, IPC will be disabled 00:03:45.858 EAL: Bus pci wants IOVA as 'DC' 00:03:45.858 EAL: Buses did not request a specific IOVA mode. 
00:03:45.858 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:45.858 EAL: Selected IOVA mode 'VA' 00:03:45.858 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.858 EAL: Probing VFIO support... 00:03:45.858 EAL: IOMMU type 1 (Type 1) is supported 00:03:45.858 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:45.858 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:45.858 EAL: VFIO support initialized 00:03:45.858 EAL: Ask a virtual area of 0x2e000 bytes 00:03:45.858 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:45.858 EAL: Setting up physically contiguous memory... 00:03:45.858 EAL: Setting maximum number of open files to 524288 00:03:45.858 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:45.858 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:45.858 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:45.858 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.858 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:45.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:45.858 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.858 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:45.858 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:45.858 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.858 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:45.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:45.858 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.858 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:45.858 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:45.858 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.858 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:45.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:45.858 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.858 
EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:45.858 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:45.858 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.858 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:45.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:45.858 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.858 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:45.858 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:45.858 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:45.858 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.858 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:45.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:45.858 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.858 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:45.858 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:45.858 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.858 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:45.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:45.858 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.858 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:45.858 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:45.858 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.858 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:45.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:45.858 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.858 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:45.858 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:45.858 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.858 EAL: Virtual area found at 0x201c00e00000 (size = 
0x61000) 00:03:45.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:45.858 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.858 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:45.858 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:45.858 EAL: Hugepages will be freed exactly as allocated. 00:03:45.858 EAL: No shared files mode enabled, IPC is disabled 00:03:45.858 EAL: No shared files mode enabled, IPC is disabled 00:03:45.858 EAL: TSC frequency is ~2700000 KHz 00:03:45.858 EAL: Main lcore 0 is ready (tid=7fe1a9534a00;cpuset=[0]) 00:03:45.858 EAL: Trying to obtain current memory policy. 00:03:45.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.858 EAL: Restoring previous memory policy: 0 00:03:45.858 EAL: request: mp_malloc_sync 00:03:45.858 EAL: No shared files mode enabled, IPC is disabled 00:03:45.858 EAL: Heap on socket 0 was expanded by 2MB 00:03:45.858 EAL: No shared files mode enabled, IPC is disabled 00:03:45.858 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:45.858 EAL: Mem event callback 'spdk:(nil)' registered 00:03:46.118 00:03:46.118 00:03:46.118 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.118 http://cunit.sourceforge.net/ 00:03:46.118 00:03:46.118 00:03:46.118 Suite: components_suite 00:03:46.118 Test: vtophys_malloc_test ...passed 00:03:46.118 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:46.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:46.118 EAL: Restoring previous memory policy: 4 00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.118 EAL: request: mp_malloc_sync 00:03:46.118 EAL: No shared files mode enabled, IPC is disabled 00:03:46.118 EAL: Heap on socket 0 was expanded by 4MB 00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.118 EAL: request: mp_malloc_sync 00:03:46.118 EAL: No shared files mode enabled, IPC is disabled 00:03:46.118 EAL: Heap on socket 0 was shrunk by 4MB 00:03:46.118 EAL: Trying to obtain current memory policy. 00:03:46.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:46.118 EAL: Restoring previous memory policy: 4 00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.118 EAL: request: mp_malloc_sync 00:03:46.118 EAL: No shared files mode enabled, IPC is disabled 00:03:46.118 EAL: Heap on socket 0 was expanded by 6MB 00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.118 EAL: request: mp_malloc_sync 00:03:46.118 EAL: No shared files mode enabled, IPC is disabled 00:03:46.118 EAL: Heap on socket 0 was shrunk by 6MB 00:03:46.118 EAL: Trying to obtain current memory policy. 00:03:46.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:46.118 EAL: Restoring previous memory policy: 4 00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.118 EAL: request: mp_malloc_sync 00:03:46.118 EAL: No shared files mode enabled, IPC is disabled 00:03:46.118 EAL: Heap on socket 0 was expanded by 10MB 00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.118 EAL: request: mp_malloc_sync 00:03:46.118 EAL: No shared files mode enabled, IPC is disabled 00:03:46.118 EAL: Heap on socket 0 was shrunk by 10MB 00:03:46.118 EAL: Trying to obtain current memory policy. 
00:03:46.118 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:46.118 EAL: Restoring previous memory policy: 4
00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.118 EAL: request: mp_malloc_sync
00:03:46.118 EAL: No shared files mode enabled, IPC is disabled
00:03:46.118 EAL: Heap on socket 0 was expanded by 18MB
00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.118 EAL: request: mp_malloc_sync
00:03:46.118 EAL: No shared files mode enabled, IPC is disabled
00:03:46.118 EAL: Heap on socket 0 was shrunk by 18MB
00:03:46.118 EAL: Trying to obtain current memory policy.
00:03:46.118 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:46.118 EAL: Restoring previous memory policy: 4
00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.118 EAL: request: mp_malloc_sync
00:03:46.118 EAL: No shared files mode enabled, IPC is disabled
00:03:46.118 EAL: Heap on socket 0 was expanded by 34MB
00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.118 EAL: request: mp_malloc_sync
00:03:46.118 EAL: No shared files mode enabled, IPC is disabled
00:03:46.118 EAL: Heap on socket 0 was shrunk by 34MB
00:03:46.118 EAL: Trying to obtain current memory policy.
00:03:46.118 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:46.118 EAL: Restoring previous memory policy: 4
00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.118 EAL: request: mp_malloc_sync
00:03:46.118 EAL: No shared files mode enabled, IPC is disabled
00:03:46.118 EAL: Heap on socket 0 was expanded by 66MB
00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.118 EAL: request: mp_malloc_sync
00:03:46.118 EAL: No shared files mode enabled, IPC is disabled
00:03:46.118 EAL: Heap on socket 0 was shrunk by 66MB
00:03:46.118 EAL: Trying to obtain current memory policy.
00:03:46.118 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:46.118 EAL: Restoring previous memory policy: 4
00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.118 EAL: request: mp_malloc_sync
00:03:46.118 EAL: No shared files mode enabled, IPC is disabled
00:03:46.118 EAL: Heap on socket 0 was expanded by 130MB
00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.118 EAL: request: mp_malloc_sync
00:03:46.118 EAL: No shared files mode enabled, IPC is disabled
00:03:46.118 EAL: Heap on socket 0 was shrunk by 130MB
00:03:46.118 EAL: Trying to obtain current memory policy.
00:03:46.118 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:46.118 EAL: Restoring previous memory policy: 4
00:03:46.118 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.118 EAL: request: mp_malloc_sync
00:03:46.118 EAL: No shared files mode enabled, IPC is disabled
00:03:46.118 EAL: Heap on socket 0 was expanded by 258MB
00:03:46.378 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.378 EAL: request: mp_malloc_sync
00:03:46.378 EAL: No shared files mode enabled, IPC is disabled
00:03:46.378 EAL: Heap on socket 0 was shrunk by 258MB
00:03:46.378 EAL: Trying to obtain current memory policy.
00:03:46.378 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:46.378 EAL: Restoring previous memory policy: 4
00:03:46.378 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.378 EAL: request: mp_malloc_sync
00:03:46.378 EAL: No shared files mode enabled, IPC is disabled
00:03:46.378 EAL: Heap on socket 0 was expanded by 514MB
00:03:46.638 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.638 EAL: request: mp_malloc_sync
00:03:46.638 EAL: No shared files mode enabled, IPC is disabled
00:03:46.638 EAL: Heap on socket 0 was shrunk by 514MB
00:03:46.638 EAL: Trying to obtain current memory policy.
00:03:46.638 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:46.896 EAL: Restoring previous memory policy: 4
00:03:46.896 EAL: Calling mem event callback 'spdk:(nil)'
00:03:46.896 EAL: request: mp_malloc_sync
00:03:46.896 EAL: No shared files mode enabled, IPC is disabled
00:03:46.896 EAL: Heap on socket 0 was expanded by 1026MB
00:03:47.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:47.413 EAL: request: mp_malloc_sync
00:03:47.413 EAL: No shared files mode enabled, IPC is disabled
00:03:47.413 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:47.413 passed
00:03:47.413 
00:03:47.413 Run Summary: Type Total Ran Passed Failed Inactive
00:03:47.413 suites 1 1 n/a 0 0
00:03:47.413 tests 2 2 2 0 0
00:03:47.413 asserts 497 497 497 0 n/a
00:03:47.413 
00:03:47.413 Elapsed time = 1.347 seconds
00:03:47.413 EAL: Calling mem event callback 'spdk:(nil)'
00:03:47.413 EAL: request: mp_malloc_sync
00:03:47.413 EAL: No shared files mode enabled, IPC is disabled
00:03:47.413 EAL: Heap on socket 0 was shrunk by 2MB
00:03:47.413 EAL: No shared files mode enabled, IPC is disabled
00:03:47.413 EAL: No shared files mode enabled, IPC is disabled
00:03:47.413 EAL: No shared files mode enabled, IPC is disabled
00:03:47.413 
00:03:47.413 real 0m1.463s
00:03:47.413 user 0m0.839s
00:03:47.413 sys 0m0.594s
00:03:47.413 06:43:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:47.413 06:43:54 -- common/autotest_common.sh@10 -- # set +x
00:03:47.413 ************************************
00:03:47.413 END TEST env_vtophys
00:03:47.413 ************************************
00:03:47.413 06:43:54 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:47.413 06:43:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:47.413 06:43:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:47.413 06:43:54 -- common/autotest_common.sh@10 -- # set +x
00:03:47.413 ************************************
00:03:47.413 START TEST env_pci
00:03:47.413 ************************************
00:03:47.413 06:43:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:47.413 
00:03:47.413 
00:03:47.413 CUnit - A unit testing framework for C - Version 2.1-3
00:03:47.413 http://cunit.sourceforge.net/
00:03:47.413 
00:03:47.413 
00:03:47.413 Suite: pci
00:03:47.413 Test: pci_hook ...[2024-05-12 06:43:54.405919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2905335 has claimed it
00:03:47.413 EAL: Cannot find device (10000:00:01.0)
00:03:47.413 EAL: Failed to attach device on primary process
00:03:47.413 passed
00:03:47.413 
00:03:47.413 Run Summary: Type Total Ran Passed Failed Inactive
00:03:47.413 suites 1 1 n/a 0 0
00:03:47.413 tests 1 1 1 0 0
00:03:47.413 asserts 25 25 25 0 n/a
00:03:47.413 
00:03:47.413 Elapsed time = 0.021 seconds
00:03:47.413 
00:03:47.413 real 0m0.034s
00:03:47.413 user 0m0.009s
00:03:47.413 sys 0m0.026s
00:03:47.413 06:43:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:47.413 06:43:54 -- common/autotest_common.sh@10 -- # set +x
00:03:47.413 ************************************
00:03:47.413 END TEST env_pci
00:03:47.413 ************************************
00:03:47.413 06:43:54 -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:47.413 06:43:54 -- env/env.sh@15 -- # uname
00:03:47.413 06:43:54 -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:47.413 06:43:54 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:47.413 06:43:54 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:47.413 06:43:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:03:47.413 06:43:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:47.413 06:43:54 -- common/autotest_common.sh@10 -- # set +x
00:03:47.413 ************************************
00:03:47.413 START TEST env_dpdk_post_init
00:03:47.413 ************************************
00:03:47.413 06:43:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:47.413 EAL: Detected CPU lcores: 48
00:03:47.413 EAL: Detected NUMA nodes: 2
00:03:47.413 EAL: Detected shared linkage of DPDK
00:03:47.413 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:47.413 EAL: Selected IOVA mode 'VA'
00:03:47.413 EAL: No free 2048 kB hugepages reported on node 1
00:03:47.413 EAL: VFIO support initialized
00:03:47.413 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:47.673 EAL: Using IOMMU type 1 (Type 1)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:03:47.673 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:03:48.608 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:03:51.951 EAL: Releasing PCI mapped resource for 0000:88:00.0
00:03:51.951 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:03:51.951 Starting DPDK initialization...
00:03:51.951 Starting SPDK post initialization...
00:03:51.951 SPDK NVMe probe
00:03:51.951 Attaching to 0000:88:00.0
00:03:51.951 Attached to 0000:88:00.0
00:03:51.951 Cleaning up...
00:03:51.951 
00:03:51.951 real 0m4.430s
00:03:51.951 user 0m3.299s
00:03:51.951 sys 0m0.191s
00:03:51.951 06:43:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:51.951 06:43:58 -- common/autotest_common.sh@10 -- # set +x
00:03:51.951 ************************************
00:03:51.951 END TEST env_dpdk_post_init
00:03:51.951 ************************************
00:03:51.951 06:43:58 -- env/env.sh@26 -- # uname
00:03:51.951 06:43:58 -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:51.951 06:43:58 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:51.951 06:43:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:51.951 06:43:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:51.951 06:43:58 -- common/autotest_common.sh@10 -- # set +x
00:03:51.951 ************************************
00:03:51.951 START TEST env_mem_callbacks
00:03:51.951 ************************************
00:03:51.951 06:43:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:51.951 EAL: Detected CPU lcores: 48
00:03:51.951 EAL: Detected NUMA nodes: 2
00:03:51.951 EAL: Detected shared linkage of DPDK
00:03:51.951 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:51.951 EAL: Selected IOVA mode 'VA'
00:03:51.951 EAL: No free 2048 kB hugepages reported on node 1
00:03:51.951 EAL: VFIO support initialized
00:03:51.951 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:51.951 
00:03:51.951 
00:03:51.951 CUnit - A unit testing framework for C - Version 2.1-3
00:03:51.951 http://cunit.sourceforge.net/
00:03:51.951 
00:03:51.951 
00:03:51.951 Suite: memory
00:03:51.951 Test: test ...
00:03:51.951 register 0x200000200000 2097152
00:03:51.951 malloc 3145728
00:03:51.951 register 0x200000400000 4194304
00:03:51.951 buf 0x200000500000 len 3145728 PASSED
00:03:51.951 malloc 64
00:03:51.951 buf 0x2000004fff40 len 64 PASSED
00:03:51.951 malloc 4194304
00:03:51.951 register 0x200000800000 6291456
00:03:51.951 buf 0x200000a00000 len 4194304 PASSED
00:03:51.951 free 0x200000500000 3145728
00:03:51.951 free 0x2000004fff40 64
00:03:51.951 unregister 0x200000400000 4194304 PASSED
00:03:51.951 free 0x200000a00000 4194304
00:03:51.951 unregister 0x200000800000 6291456 PASSED
00:03:51.951 malloc 8388608
00:03:51.951 register 0x200000400000 10485760
00:03:51.951 buf 0x200000600000 len 8388608 PASSED
00:03:51.951 free 0x200000600000 8388608
00:03:51.951 unregister 0x200000400000 10485760 PASSED
00:03:51.951 passed
00:03:51.951 
00:03:51.951 Run Summary: Type Total Ran Passed Failed Inactive
00:03:51.951 suites 1 1 n/a 0 0
00:03:51.951 tests 1 1 1 0 0
00:03:51.951 asserts 15 15 15 0 n/a
00:03:51.951 
00:03:51.951 Elapsed time = 0.005 seconds
00:03:51.951 
00:03:51.951 real 0m0.049s
00:03:51.951 user 0m0.011s
00:03:51.951 sys 0m0.038s
00:03:51.951 06:43:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:51.951 06:43:58 -- common/autotest_common.sh@10 -- # set +x
00:03:51.951 ************************************
00:03:51.951 END TEST env_mem_callbacks
************************************
00:03:51.951 
00:03:51.951 real 0m6.315s
00:03:51.951 user 0m4.384s
00:03:51.951 sys 0m0.984s
00:03:51.951 06:43:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:51.951 06:43:58 -- common/autotest_common.sh@10 -- # set +x
00:03:51.951 ************************************
00:03:51.951 END TEST env
00:03:51.951 ************************************
00:03:51.951 06:43:59 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:51.952 06:43:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:51.952 06:43:59 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:51.952 06:43:59 -- common/autotest_common.sh@10 -- # set +x
00:03:51.952 ************************************
00:03:51.952 START TEST rpc
00:03:51.952 ************************************
00:03:51.952 06:43:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:51.952 * Looking for test storage...
00:03:51.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:51.952 06:43:59 -- rpc/rpc.sh@65 -- # spdk_pid=2905992
00:03:51.952 06:43:59 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:51.952 06:43:59 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:51.952 06:43:59 -- rpc/rpc.sh@67 -- # waitforlisten 2905992
00:03:51.952 06:43:59 -- common/autotest_common.sh@819 -- # '[' -z 2905992 ']'
00:03:51.952 06:43:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:51.952 06:43:59 -- common/autotest_common.sh@824 -- # local max_retries=100
00:03:51.952 06:43:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:51.952 06:43:59 -- common/autotest_common.sh@828 -- # xtrace_disable
00:03:51.952 06:43:59 -- common/autotest_common.sh@10 -- # set +x
00:03:52.211 [2024-05-12 06:43:59.113276] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:03:52.211 [2024-05-12 06:43:59.113360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905992 ]
00:03:52.211 EAL: No free 2048 kB hugepages reported on node 1
00:03:52.211 [2024-05-12 06:43:59.173581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:52.211 [2024-05-12 06:43:59.287212] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:03:52.211 [2024-05-12 06:43:59.287363] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:52.211 [2024-05-12 06:43:59.287384] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2905992' to capture a snapshot of events at runtime.
00:03:52.211 [2024-05-12 06:43:59.287403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2905992 for offline analysis/debug.
00:03:52.211 [2024-05-12 06:43:59.287444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:03:53.146 06:44:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:03:53.146 06:44:00 -- common/autotest_common.sh@852 -- # return 0
00:03:53.146 06:44:00 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:53.146 06:44:00 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:53.146 06:44:00 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:53.146 06:44:00 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:53.146 06:44:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:53.146 06:44:00 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 ************************************
00:03:53.146 START TEST rpc_integrity
00:03:53.146 ************************************
00:03:53.146 06:44:00 -- common/autotest_common.sh@1104 -- # rpc_integrity
00:03:53.146 06:44:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:53.146 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.146 06:44:00 -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:53.146 06:44:00 -- rpc/rpc.sh@13 -- # jq length
00:03:53.146 06:44:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:53.146 06:44:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:53.146 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.146 06:44:00 -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:53.146 06:44:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:53.146 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.146 06:44:00 -- rpc/rpc.sh@16 -- # bdevs='[
00:03:53.146 {
00:03:53.146 "name": "Malloc0",
00:03:53.146 "aliases": [
00:03:53.146 "658b7a3a-9613-4a2a-8a84-4c84aa235774"
00:03:53.146 ],
00:03:53.146 "product_name": "Malloc disk",
00:03:53.146 "block_size": 512,
00:03:53.146 "num_blocks": 16384,
00:03:53.146 "uuid": "658b7a3a-9613-4a2a-8a84-4c84aa235774",
00:03:53.146 "assigned_rate_limits": {
00:03:53.146 "rw_ios_per_sec": 0,
00:03:53.146 "rw_mbytes_per_sec": 0,
00:03:53.146 "r_mbytes_per_sec": 0,
00:03:53.146 "w_mbytes_per_sec": 0
00:03:53.146 },
00:03:53.146 "claimed": false,
00:03:53.146 "zoned": false,
00:03:53.146 "supported_io_types": {
00:03:53.146 "read": true,
00:03:53.146 "write": true,
00:03:53.146 "unmap": true,
00:03:53.146 "write_zeroes": true,
00:03:53.146 "flush": true,
00:03:53.146 "reset": true,
00:03:53.146 "compare": false,
00:03:53.146 "compare_and_write": false,
00:03:53.146 "abort": true,
00:03:53.146 "nvme_admin": false,
00:03:53.146 "nvme_io": false
00:03:53.146 },
00:03:53.146 "memory_domains": [
00:03:53.146 {
00:03:53.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:53.146 "dma_device_type": 2
00:03:53.146 }
00:03:53.146 ],
00:03:53.146 "driver_specific": {}
00:03:53.146 }
00:03:53.146 ]'
00:03:53.146 06:44:00 -- rpc/rpc.sh@17 -- # jq length
00:03:53.146 06:44:00 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:53.146 06:44:00 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:53.146 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 [2024-05-12 06:44:00.143744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:53.146 [2024-05-12 06:44:00.143786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:53.146 [2024-05-12 06:44:00.143822] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bf8f60
00:03:53.146 [2024-05-12 06:44:00.143836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:53.146 [2024-05-12 06:44:00.145361] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:53.146 [2024-05-12 06:44:00.145389] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:53.146 Passthru0
00:03:53.146 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.146 06:44:00 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:53.146 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.146 06:44:00 -- rpc/rpc.sh@20 -- # bdevs='[
00:03:53.146 {
00:03:53.146 "name": "Malloc0",
00:03:53.146 "aliases": [
00:03:53.146 "658b7a3a-9613-4a2a-8a84-4c84aa235774"
00:03:53.146 ],
00:03:53.146 "product_name": "Malloc disk",
00:03:53.146 "block_size": 512,
00:03:53.146 "num_blocks": 16384,
00:03:53.146 "uuid": "658b7a3a-9613-4a2a-8a84-4c84aa235774",
00:03:53.146 "assigned_rate_limits": {
00:03:53.146 "rw_ios_per_sec": 0,
00:03:53.146 "rw_mbytes_per_sec": 0,
00:03:53.146 "r_mbytes_per_sec": 0,
00:03:53.146 "w_mbytes_per_sec": 0
00:03:53.146 },
00:03:53.146 "claimed": true,
00:03:53.146 "claim_type": "exclusive_write",
00:03:53.146 "zoned": false,
00:03:53.146 "supported_io_types": {
00:03:53.146 "read": true,
00:03:53.146 "write": true,
00:03:53.146 "unmap": true,
00:03:53.146 "write_zeroes": true,
00:03:53.146 "flush": true,
00:03:53.146 "reset": true,
00:03:53.146 "compare": false,
00:03:53.146 "compare_and_write": false,
00:03:53.146 "abort": true,
00:03:53.146 "nvme_admin": false,
00:03:53.146 "nvme_io": false
00:03:53.146 },
00:03:53.146 "memory_domains": [
00:03:53.146 {
00:03:53.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:53.146 "dma_device_type": 2
00:03:53.146 }
00:03:53.146 ],
00:03:53.146 "driver_specific": {}
00:03:53.146 },
00:03:53.146 {
00:03:53.146 "name": "Passthru0",
00:03:53.146 "aliases": [
00:03:53.146 "e68da825-ecee-50e9-83e0-5a9591ad6849"
00:03:53.146 ],
00:03:53.146 "product_name": "passthru",
00:03:53.146 "block_size": 512,
00:03:53.146 "num_blocks": 16384,
00:03:53.146 "uuid": "e68da825-ecee-50e9-83e0-5a9591ad6849",
00:03:53.146 "assigned_rate_limits": {
00:03:53.146 "rw_ios_per_sec": 0,
00:03:53.146 "rw_mbytes_per_sec": 0,
00:03:53.146 "r_mbytes_per_sec": 0,
00:03:53.146 "w_mbytes_per_sec": 0
00:03:53.146 },
00:03:53.146 "claimed": false,
00:03:53.146 "zoned": false,
00:03:53.146 "supported_io_types": {
00:03:53.146 "read": true,
00:03:53.146 "write": true,
00:03:53.146 "unmap": true,
00:03:53.146 "write_zeroes": true,
00:03:53.146 "flush": true,
00:03:53.146 "reset": true,
00:03:53.146 "compare": false,
00:03:53.146 "compare_and_write": false,
00:03:53.146 "abort": true,
00:03:53.146 "nvme_admin": false,
00:03:53.146 "nvme_io": false
00:03:53.146 },
00:03:53.146 "memory_domains": [
00:03:53.146 {
00:03:53.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:53.146 "dma_device_type": 2
00:03:53.146 }
00:03:53.146 ],
00:03:53.146 "driver_specific": {
00:03:53.146 "passthru": {
00:03:53.146 "name": "Passthru0",
00:03:53.146 "base_bdev_name": "Malloc0"
00:03:53.146 }
00:03:53.146 }
00:03:53.146 }
00:03:53.146 ]'
00:03:53.146 06:44:00 -- rpc/rpc.sh@21 -- # jq length
00:03:53.146 06:44:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:53.146 06:44:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:53.146 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.146 06:44:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:03:53.146 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.146 06:44:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:53.146 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.146 06:44:00 -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:53.146 06:44:00 -- rpc/rpc.sh@26 -- # jq length
00:03:53.146 06:44:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:53.146 
00:03:53.146 real 0m0.228s
00:03:53.146 user 0m0.148s
00:03:53.146 sys 0m0.018s
00:03:53.146 06:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:53.146 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.146 ************************************
00:03:53.146 END TEST rpc_integrity
00:03:53.146 ************************************
00:03:53.407 06:44:00 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:03:53.407 06:44:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:53.407 06:44:00 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:53.407 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.407 ************************************
00:03:53.407 START TEST rpc_plugins
00:03:53.407 ************************************
00:03:53.407 06:44:00 -- common/autotest_common.sh@1104 -- # rpc_plugins
00:03:53.407 06:44:00 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:03:53.407 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.407 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.407 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.407 06:44:00 -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:03:53.407 06:44:00 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:03:53.407 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.407 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.407 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.407 06:44:00 -- rpc/rpc.sh@31 -- # bdevs='[
00:03:53.407 {
00:03:53.407 "name": "Malloc1",
00:03:53.407 "aliases": [
00:03:53.407 "275d97d5-6c3f-4bbf-85c5-0059919f8f5b"
00:03:53.407 ],
00:03:53.407 "product_name": "Malloc disk",
00:03:53.407 "block_size": 4096,
00:03:53.407 "num_blocks": 256,
00:03:53.407 "uuid": "275d97d5-6c3f-4bbf-85c5-0059919f8f5b",
00:03:53.407 "assigned_rate_limits": {
00:03:53.407 "rw_ios_per_sec": 0,
00:03:53.407 "rw_mbytes_per_sec": 0,
00:03:53.407 "r_mbytes_per_sec": 0,
00:03:53.407 "w_mbytes_per_sec": 0
00:03:53.407 },
00:03:53.407 "claimed": false,
00:03:53.407 "zoned": false,
00:03:53.407 "supported_io_types": {
00:03:53.407 "read": true,
00:03:53.407 "write": true,
00:03:53.407 "unmap": true,
00:03:53.407 "write_zeroes": true,
00:03:53.407 "flush": true,
00:03:53.407 "reset": true,
00:03:53.407 "compare": false,
00:03:53.407 "compare_and_write": false,
00:03:53.407 "abort": true,
00:03:53.407 "nvme_admin": false,
00:03:53.407 "nvme_io": false
00:03:53.407 },
00:03:53.407 "memory_domains": [
00:03:53.407 {
00:03:53.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:53.407 "dma_device_type": 2
00:03:53.407 }
00:03:53.407 ],
00:03:53.407 "driver_specific": {}
00:03:53.407 }
00:03:53.407 ]'
00:03:53.407 06:44:00 -- rpc/rpc.sh@32 -- # jq length
00:03:53.407 06:44:00 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:03:53.407 06:44:00 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:03:53.407 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.407 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.407 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.407 06:44:00 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:03:53.407 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.407 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.407 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.407 06:44:00 -- rpc/rpc.sh@35 -- # bdevs='[]'
00:03:53.407 06:44:00 -- rpc/rpc.sh@36 -- # jq length
00:03:53.407 06:44:00 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:03:53.407 
00:03:53.407 real 0m0.108s
00:03:53.407 user 0m0.073s
00:03:53.407 sys 0m0.008s
00:03:53.407 06:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:53.407 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.407 ************************************
00:03:53.407 END TEST rpc_plugins
00:03:53.407 ************************************
00:03:53.407 06:44:00 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:03:53.407 06:44:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:53.407 06:44:00 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:53.407 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.407 ************************************
00:03:53.407 START TEST rpc_trace_cmd_test
00:03:53.407 ************************************
00:03:53.407 06:44:00 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test
00:03:53.407 06:44:00 -- rpc/rpc.sh@40 -- # local info
00:03:53.407 06:44:00 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:03:53.407 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.407 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.407 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.407 06:44:00 -- rpc/rpc.sh@42 -- # info='{
00:03:53.407 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2905992",
00:03:53.407 "tpoint_group_mask": "0x8",
00:03:53.407 "iscsi_conn": {
00:03:53.407 "mask": "0x2",
00:03:53.407 "tpoint_mask": "0x0"
00:03:53.407 },
00:03:53.407 "scsi": {
00:03:53.407 "mask": "0x4",
00:03:53.407 "tpoint_mask": "0x0"
00:03:53.407 },
00:03:53.407 "bdev": {
00:03:53.407 "mask": "0x8",
00:03:53.407 "tpoint_mask": "0xffffffffffffffff"
00:03:53.407 },
00:03:53.407 "nvmf_rdma": {
00:03:53.407 "mask": "0x10",
00:03:53.407 "tpoint_mask": "0x0"
00:03:53.407 },
00:03:53.407 "nvmf_tcp": {
00:03:53.407 "mask": "0x20",
00:03:53.407 "tpoint_mask": "0x0"
00:03:53.407 },
00:03:53.407 "ftl": {
00:03:53.407 "mask": "0x40",
00:03:53.407 "tpoint_mask": "0x0"
00:03:53.407 },
00:03:53.408 "blobfs": {
00:03:53.408 "mask": "0x80",
00:03:53.408 "tpoint_mask": "0x0"
00:03:53.408 },
00:03:53.408 "dsa": {
00:03:53.408 "mask": "0x200",
00:03:53.408 "tpoint_mask": "0x0"
00:03:53.408 },
00:03:53.408 "thread": {
00:03:53.408 "mask": "0x400",
00:03:53.408 "tpoint_mask": "0x0"
00:03:53.408 },
00:03:53.408 "nvme_pcie": {
00:03:53.408 "mask": "0x800",
00:03:53.408 "tpoint_mask": "0x0"
00:03:53.408 },
00:03:53.408 "iaa": {
00:03:53.408 "mask": "0x1000",
00:03:53.408 "tpoint_mask": "0x0"
00:03:53.408 },
00:03:53.408 "nvme_tcp": {
00:03:53.408 "mask": "0x2000",
00:03:53.408 "tpoint_mask": "0x0"
00:03:53.408 },
00:03:53.408 "bdev_nvme": {
00:03:53.408 "mask": "0x4000",
00:03:53.408 "tpoint_mask": "0x0"
00:03:53.408 }
00:03:53.408 }'
00:03:53.408 06:44:00 -- rpc/rpc.sh@43 -- # jq length
00:03:53.408 06:44:00 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']'
00:03:53.408 06:44:00 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:03:53.408 06:44:00 -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:03:53.408 06:44:00 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:03:53.666 06:44:00 -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:03:53.666 06:44:00 -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:03:53.666 06:44:00 -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:03:53.666 06:44:00 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:03:53.666 06:44:00 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:03:53.666 
00:03:53.666 real 0m0.195s
00:03:53.666 user 0m0.173s
00:03:53.666 sys 0m0.013s
00:03:53.666 06:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:53.666 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.666 ************************************
00:03:53.666 END TEST rpc_trace_cmd_test
00:03:53.666 ************************************
00:03:53.666 06:44:00 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:03:53.666 06:44:00 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:03:53.666 06:44:00 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:03:53.666 06:44:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:53.666 06:44:00 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:53.666 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.666 ************************************
00:03:53.666 START TEST rpc_daemon_integrity
00:03:53.666 ************************************
00:03:53.666 06:44:00 -- common/autotest_common.sh@1104 -- # rpc_integrity
00:03:53.666 06:44:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:53.666 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.666 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.666 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.666 06:44:00 -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:53.666 06:44:00 -- rpc/rpc.sh@13 -- # jq length
00:03:53.666 06:44:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:53.666 06:44:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:53.666 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:03:53.666 06:44:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.666 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:03:53.666 06:44:00 -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:03:53.666 06:44:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:53.666 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:53.666 06:44:00 -- common/autotest_common.sh@10 -- # set +x 00:03:53.666 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:53.666 06:44:00 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:53.666 { 00:03:53.666 "name": "Malloc2", 00:03:53.666 "aliases": [ 00:03:53.666 "ed5aa133-702d-45a2-9d42-1228807d8fab" 00:03:53.666 ], 00:03:53.666 "product_name": "Malloc disk", 00:03:53.666 "block_size": 512, 00:03:53.666 "num_blocks": 16384, 00:03:53.666 "uuid": "ed5aa133-702d-45a2-9d42-1228807d8fab", 00:03:53.666 "assigned_rate_limits": { 00:03:53.666 "rw_ios_per_sec": 0, 00:03:53.666 "rw_mbytes_per_sec": 0, 00:03:53.666 "r_mbytes_per_sec": 0, 00:03:53.666 "w_mbytes_per_sec": 0 00:03:53.666 }, 00:03:53.666 "claimed": false, 00:03:53.666 "zoned": false, 00:03:53.666 "supported_io_types": { 00:03:53.667 "read": true, 00:03:53.667 "write": true, 00:03:53.667 "unmap": true, 00:03:53.667 "write_zeroes": true, 00:03:53.667 "flush": true, 00:03:53.667 "reset": true, 00:03:53.667 "compare": false, 00:03:53.667 "compare_and_write": false, 00:03:53.667 "abort": true, 00:03:53.667 "nvme_admin": false, 00:03:53.667 "nvme_io": false 00:03:53.667 }, 00:03:53.667 "memory_domains": [ 00:03:53.667 { 00:03:53.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.667 "dma_device_type": 2 00:03:53.667 } 00:03:53.667 ], 00:03:53.667 "driver_specific": {} 00:03:53.667 } 00:03:53.667 ]' 00:03:53.667 06:44:00 -- rpc/rpc.sh@17 -- # jq length 00:03:53.667 06:44:00 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:53.667 06:44:00 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:53.667 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:53.667 06:44:00 -- common/autotest_common.sh@10 -- # set +x 00:03:53.667 [2024-05-12 06:44:00.749509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on Malloc2 00:03:53.667 [2024-05-12 06:44:00.749552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:53.667 [2024-05-12 06:44:00.749580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bf9990 00:03:53.667 [2024-05-12 06:44:00.749597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:53.667 [2024-05-12 06:44:00.750961] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:53.667 [2024-05-12 06:44:00.751000] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:53.667 Passthru0 00:03:53.667 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:53.667 06:44:00 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:53.667 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:53.667 06:44:00 -- common/autotest_common.sh@10 -- # set +x 00:03:53.667 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:53.667 06:44:00 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:53.667 { 00:03:53.667 "name": "Malloc2", 00:03:53.667 "aliases": [ 00:03:53.667 "ed5aa133-702d-45a2-9d42-1228807d8fab" 00:03:53.667 ], 00:03:53.667 "product_name": "Malloc disk", 00:03:53.667 "block_size": 512, 00:03:53.667 "num_blocks": 16384, 00:03:53.667 "uuid": "ed5aa133-702d-45a2-9d42-1228807d8fab", 00:03:53.667 "assigned_rate_limits": { 00:03:53.667 "rw_ios_per_sec": 0, 00:03:53.667 "rw_mbytes_per_sec": 0, 00:03:53.667 "r_mbytes_per_sec": 0, 00:03:53.667 "w_mbytes_per_sec": 0 00:03:53.667 }, 00:03:53.667 "claimed": true, 00:03:53.667 "claim_type": "exclusive_write", 00:03:53.667 "zoned": false, 00:03:53.667 "supported_io_types": { 00:03:53.667 "read": true, 00:03:53.667 "write": true, 00:03:53.667 "unmap": true, 00:03:53.667 "write_zeroes": true, 00:03:53.667 "flush": true, 00:03:53.667 "reset": true, 00:03:53.667 "compare": false, 00:03:53.667 "compare_and_write": false, 00:03:53.667 "abort": true, 00:03:53.667 
"nvme_admin": false, 00:03:53.667 "nvme_io": false 00:03:53.667 }, 00:03:53.667 "memory_domains": [ 00:03:53.667 { 00:03:53.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.667 "dma_device_type": 2 00:03:53.667 } 00:03:53.667 ], 00:03:53.667 "driver_specific": {} 00:03:53.667 }, 00:03:53.667 { 00:03:53.667 "name": "Passthru0", 00:03:53.667 "aliases": [ 00:03:53.667 "b9f9c289-c9b5-50d7-894d-b4992fc0f28c" 00:03:53.667 ], 00:03:53.667 "product_name": "passthru", 00:03:53.667 "block_size": 512, 00:03:53.667 "num_blocks": 16384, 00:03:53.667 "uuid": "b9f9c289-c9b5-50d7-894d-b4992fc0f28c", 00:03:53.667 "assigned_rate_limits": { 00:03:53.667 "rw_ios_per_sec": 0, 00:03:53.667 "rw_mbytes_per_sec": 0, 00:03:53.667 "r_mbytes_per_sec": 0, 00:03:53.667 "w_mbytes_per_sec": 0 00:03:53.667 }, 00:03:53.667 "claimed": false, 00:03:53.667 "zoned": false, 00:03:53.667 "supported_io_types": { 00:03:53.667 "read": true, 00:03:53.667 "write": true, 00:03:53.667 "unmap": true, 00:03:53.667 "write_zeroes": true, 00:03:53.667 "flush": true, 00:03:53.667 "reset": true, 00:03:53.667 "compare": false, 00:03:53.667 "compare_and_write": false, 00:03:53.667 "abort": true, 00:03:53.667 "nvme_admin": false, 00:03:53.667 "nvme_io": false 00:03:53.667 }, 00:03:53.667 "memory_domains": [ 00:03:53.667 { 00:03:53.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.667 "dma_device_type": 2 00:03:53.667 } 00:03:53.667 ], 00:03:53.667 "driver_specific": { 00:03:53.667 "passthru": { 00:03:53.667 "name": "Passthru0", 00:03:53.667 "base_bdev_name": "Malloc2" 00:03:53.667 } 00:03:53.667 } 00:03:53.667 } 00:03:53.667 ]' 00:03:53.667 06:44:00 -- rpc/rpc.sh@21 -- # jq length 00:03:53.926 06:44:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:53.926 06:44:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:53.926 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:53.926 06:44:00 -- common/autotest_common.sh@10 -- # set +x 00:03:53.926 06:44:00 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:53.926 06:44:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:53.926 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:53.926 06:44:00 -- common/autotest_common.sh@10 -- # set +x 00:03:53.926 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:53.926 06:44:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:53.926 06:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:53.926 06:44:00 -- common/autotest_common.sh@10 -- # set +x 00:03:53.926 06:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:53.926 06:44:00 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:53.926 06:44:00 -- rpc/rpc.sh@26 -- # jq length 00:03:53.926 06:44:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:53.926 00:03:53.926 real 0m0.218s 00:03:53.926 user 0m0.150s 00:03:53.926 sys 0m0.017s 00:03:53.926 06:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.926 06:44:00 -- common/autotest_common.sh@10 -- # set +x 00:03:53.926 ************************************ 00:03:53.926 END TEST rpc_daemon_integrity 00:03:53.926 ************************************ 00:03:53.926 06:44:00 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:53.926 06:44:00 -- rpc/rpc.sh@84 -- # killprocess 2905992 00:03:53.926 06:44:00 -- common/autotest_common.sh@926 -- # '[' -z 2905992 ']' 00:03:53.926 06:44:00 -- common/autotest_common.sh@930 -- # kill -0 2905992 00:03:53.926 06:44:00 -- common/autotest_common.sh@931 -- # uname 00:03:53.926 06:44:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:03:53.926 06:44:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2905992 00:03:53.926 06:44:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:03:53.926 06:44:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:03:53.926 06:44:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2905992' 00:03:53.926 killing process 
with pid 2905992 00:03:53.926 06:44:00 -- common/autotest_common.sh@945 -- # kill 2905992 00:03:53.926 06:44:00 -- common/autotest_common.sh@950 -- # wait 2905992 00:03:54.494 00:03:54.494 real 0m2.353s 00:03:54.494 user 0m2.990s 00:03:54.494 sys 0m0.555s 00:03:54.494 06:44:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.494 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:03:54.494 ************************************ 00:03:54.494 END TEST rpc 00:03:54.494 ************************************ 00:03:54.494 06:44:01 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:54.494 06:44:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:54.494 06:44:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.494 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:03:54.494 ************************************ 00:03:54.494 START TEST rpc_client 00:03:54.494 ************************************ 00:03:54.494 06:44:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:54.494 * Looking for test storage... 
00:03:54.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:54.494 06:44:01 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:54.494 OK 00:03:54.494 06:44:01 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:54.494 00:03:54.494 real 0m0.065s 00:03:54.494 user 0m0.036s 00:03:54.494 sys 0m0.034s 00:03:54.494 06:44:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.494 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:03:54.494 ************************************ 00:03:54.494 END TEST rpc_client 00:03:54.494 ************************************ 00:03:54.494 06:44:01 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:54.494 06:44:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:54.494 06:44:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.494 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:03:54.494 ************************************ 00:03:54.495 START TEST json_config 00:03:54.495 ************************************ 00:03:54.495 06:44:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:54.495 06:44:01 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:54.495 06:44:01 -- nvmf/common.sh@7 -- # uname -s 00:03:54.495 06:44:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:54.495 06:44:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:54.495 06:44:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:54.495 06:44:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:54.495 06:44:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:54.495 06:44:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:54.495 06:44:01 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:54.495 06:44:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:54.495 06:44:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:54.495 06:44:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:54.495 06:44:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:54.495 06:44:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:54.495 06:44:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:54.495 06:44:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:54.495 06:44:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:54.495 06:44:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:54.495 06:44:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.495 06:44:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.495 06:44:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.495 06:44:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.495 06:44:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.495 06:44:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.495 06:44:01 -- paths/export.sh@5 -- # export PATH 00:03:54.495 06:44:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.495 06:44:01 -- nvmf/common.sh@46 -- # : 0 00:03:54.495 06:44:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:54.495 06:44:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:54.495 06:44:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:54.495 06:44:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.495 06:44:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.495 06:44:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:54.495 06:44:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:54.495 06:44:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:54.495 
06:44:01 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:54.495 06:44:01 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:03:54.495 06:44:01 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:54.495 06:44:01 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:54.495 06:44:01 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:03:54.495 06:44:01 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:03:54.495 06:44:01 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:54.495 06:44:01 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:03:54.495 06:44:01 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:54.495 06:44:01 -- json_config/json_config.sh@32 -- # declare -A app_params 00:03:54.495 06:44:01 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:54.495 06:44:01 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:03:54.495 06:44:01 -- json_config/json_config.sh@43 -- # last_event_id=0 00:03:54.495 06:44:01 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:54.495 06:44:01 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:03:54.495 INFO: JSON configuration test init 00:03:54.495 06:44:01 -- json_config/json_config.sh@420 -- # json_config_test_init 00:03:54.495 06:44:01 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:03:54.495 06:44:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:54.495 06:44:01 -- 
common/autotest_common.sh@10 -- # set +x 00:03:54.495 06:44:01 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:03:54.495 06:44:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:54.495 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:03:54.495 06:44:01 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:03:54.495 06:44:01 -- json_config/json_config.sh@98 -- # local app=target 00:03:54.495 06:44:01 -- json_config/json_config.sh@99 -- # shift 00:03:54.495 06:44:01 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:54.495 06:44:01 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:54.495 06:44:01 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:54.495 06:44:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:54.495 06:44:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:54.495 06:44:01 -- json_config/json_config.sh@111 -- # app_pid[$app]=2906474 00:03:54.495 06:44:01 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:54.495 06:44:01 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:54.495 Waiting for target to run... 00:03:54.495 06:44:01 -- json_config/json_config.sh@114 -- # waitforlisten 2906474 /var/tmp/spdk_tgt.sock 00:03:54.495 06:44:01 -- common/autotest_common.sh@819 -- # '[' -z 2906474 ']' 00:03:54.495 06:44:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.495 06:44:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:54.495 06:44:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:54.495 06:44:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:54.495 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:03:54.495 [2024-05-12 06:44:01.579222] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:03:54.495 [2024-05-12 06:44:01.579307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906474 ] 00:03:54.495 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.059 [2024-05-12 06:44:01.916743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.059 [2024-05-12 06:44:02.003718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:55.059 [2024-05-12 06:44:02.003904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.624 06:44:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:55.624 06:44:02 -- common/autotest_common.sh@852 -- # return 0 00:03:55.624 06:44:02 -- json_config/json_config.sh@115 -- # echo '' 00:03:55.624 00:03:55.624 06:44:02 -- json_config/json_config.sh@322 -- # create_accel_config 00:03:55.624 06:44:02 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:03:55.624 06:44:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:55.624 06:44:02 -- common/autotest_common.sh@10 -- # set +x 00:03:55.624 06:44:02 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:03:55.624 06:44:02 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:03:55.624 06:44:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:55.624 06:44:02 -- common/autotest_common.sh@10 -- # set +x 00:03:55.624 06:44:02 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:55.624 06:44:02 -- 
json_config/json_config.sh@327 -- # tgt_rpc load_config 00:03:55.624 06:44:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:58.914 06:44:05 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:03:58.914 06:44:05 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:03:58.914 06:44:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:58.914 06:44:05 -- common/autotest_common.sh@10 -- # set +x 00:03:58.914 06:44:05 -- json_config/json_config.sh@48 -- # local ret=0 00:03:58.914 06:44:05 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:58.914 06:44:05 -- json_config/json_config.sh@49 -- # local enabled_types 00:03:58.914 06:44:05 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:58.914 06:44:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:58.914 06:44:05 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:58.914 06:44:05 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:58.914 06:44:05 -- json_config/json_config.sh@51 -- # local get_types 00:03:58.914 06:44:05 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:58.914 06:44:05 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:03:58.914 06:44:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:58.914 06:44:05 -- common/autotest_common.sh@10 -- # set +x 00:03:58.914 06:44:05 -- json_config/json_config.sh@58 -- # return 0 00:03:58.914 06:44:05 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:03:58.914 06:44:05 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:03:58.914 06:44:05 -- json_config/json_config.sh@339 -- 
# [[ 0 -eq 1 ]] 00:03:58.914 06:44:05 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:03:58.914 06:44:05 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:03:58.914 06:44:05 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:03:58.914 06:44:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:58.914 06:44:05 -- common/autotest_common.sh@10 -- # set +x 00:03:58.915 06:44:05 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:58.915 06:44:05 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:03:58.915 06:44:05 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:03:58.915 06:44:05 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:58.915 06:44:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:59.172 MallocForNvmf0 00:03:59.172 06:44:06 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:59.172 06:44:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:59.431 MallocForNvmf1 00:03:59.431 06:44:06 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:59.431 06:44:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:59.689 [2024-05-12 06:44:06.638212] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.689 06:44:06 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:59.689 06:44:06 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:59.947 06:44:06 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:59.947 06:44:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:00.204 06:44:07 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:00.204 06:44:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:00.462 06:44:07 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:00.462 06:44:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:00.462 [2024-05-12 06:44:07.585334] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:00.720 06:44:07 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:00.720 06:44:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:00.720 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:04:00.720 06:44:07 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:00.720 06:44:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:00.720 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:04:00.720 06:44:07 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:00.720 06:44:07 -- json_config/json_config.sh@353 -- 
# tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:00.720 06:44:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:00.978 MallocBdevForConfigChangeCheck 00:04:00.978 06:44:07 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:00.978 06:44:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:00.978 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:04:00.978 06:44:07 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:00.978 06:44:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.238 06:44:08 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:01.238 INFO: shutting down applications... 00:04:01.238 06:44:08 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:01.238 06:44:08 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:01.238 06:44:08 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:01.238 06:44:08 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:03.143 Calling clear_iscsi_subsystem 00:04:03.143 Calling clear_nvmf_subsystem 00:04:03.143 Calling clear_nbd_subsystem 00:04:03.143 Calling clear_ublk_subsystem 00:04:03.143 Calling clear_vhost_blk_subsystem 00:04:03.143 Calling clear_vhost_scsi_subsystem 00:04:03.143 Calling clear_scheduler_subsystem 00:04:03.143 Calling clear_bdev_subsystem 00:04:03.143 Calling clear_accel_subsystem 00:04:03.143 Calling clear_vmd_subsystem 00:04:03.143 Calling clear_sock_subsystem 00:04:03.143 Calling clear_iobuf_subsystem 00:04:03.143 06:44:09 -- json_config/json_config.sh@390 -- # local 
config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:03.143 06:44:09 -- json_config/json_config.sh@396 -- # count=100 00:04:03.143 06:44:09 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:03.143 06:44:09 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.143 06:44:09 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:03.143 06:44:09 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:03.403 06:44:10 -- json_config/json_config.sh@398 -- # break 00:04:03.403 06:44:10 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:03.403 06:44:10 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:03.403 06:44:10 -- json_config/json_config.sh@120 -- # local app=target 00:04:03.403 06:44:10 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:03.403 06:44:10 -- json_config/json_config.sh@124 -- # [[ -n 2906474 ]] 00:04:03.403 06:44:10 -- json_config/json_config.sh@127 -- # kill -SIGINT 2906474 00:04:03.403 06:44:10 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:03.403 06:44:10 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:03.403 06:44:10 -- json_config/json_config.sh@130 -- # kill -0 2906474 00:04:03.403 06:44:10 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:03.664 06:44:10 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:03.664 06:44:10 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:03.664 06:44:10 -- json_config/json_config.sh@130 -- # kill -0 2906474 00:04:03.664 06:44:10 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:03.664 06:44:10 -- json_config/json_config.sh@132 -- # break 00:04:03.664 06:44:10 -- 
json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:03.664 06:44:10 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:03.664 SPDK target shutdown done 00:04:03.664 06:44:10 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:03.664 INFO: relaunching applications... 00:04:03.664 06:44:10 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.664 06:44:10 -- json_config/json_config.sh@98 -- # local app=target 00:04:03.664 06:44:10 -- json_config/json_config.sh@99 -- # shift 00:04:03.664 06:44:10 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:03.664 06:44:10 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:03.664 06:44:10 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:03.664 06:44:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:03.664 06:44:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:03.664 06:44:10 -- json_config/json_config.sh@111 -- # app_pid[$app]=2907698 00:04:03.664 06:44:10 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.664 06:44:10 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:03.664 Waiting for target to run... 
00:04:03.664 06:44:10 -- json_config/json_config.sh@114 -- # waitforlisten 2907698 /var/tmp/spdk_tgt.sock 00:04:03.664 06:44:10 -- common/autotest_common.sh@819 -- # '[' -z 2907698 ']' 00:04:03.664 06:44:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:03.664 06:44:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:03.664 06:44:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:03.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:03.664 06:44:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:03.664 06:44:10 -- common/autotest_common.sh@10 -- # set +x 00:04:03.924 [2024-05-12 06:44:10.833254] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:03.924 [2024-05-12 06:44:10.833345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907698 ] 00:04:03.924 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.185 [2024-05-12 06:44:11.187268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.185 [2024-05-12 06:44:11.277255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:04.185 [2024-05-12 06:44:11.277434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.497 [2024-05-12 06:44:14.309623] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.497 [2024-05-12 06:44:14.342095] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:07.767 06:44:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:07.767 06:44:14 -- common/autotest_common.sh@852 -- # return 0 00:04:07.767 06:44:14 -- 
json_config/json_config.sh@115 -- # echo '' 00:04:07.767 00:04:07.767 06:44:14 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:07.767 06:44:14 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:07.767 INFO: Checking if target configuration is the same... 00:04:07.767 06:44:14 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.767 06:44:14 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:07.767 06:44:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:07.767 + '[' 2 -ne 2 ']' 00:04:07.767 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:07.767 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:07.767 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.767 +++ basename /dev/fd/62 00:04:07.767 ++ mktemp /tmp/62.XXX 00:04:07.767 + tmp_file_1=/tmp/62.RWE 00:04:07.767 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.767 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:07.767 + tmp_file_2=/tmp/spdk_tgt_config.json.DNY 00:04:07.767 + ret=0 00:04:07.768 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.027 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.027 + diff -u /tmp/62.RWE /tmp/spdk_tgt_config.json.DNY 00:04:08.027 + echo 'INFO: JSON config files are the same' 00:04:08.027 INFO: JSON config files are the same 00:04:08.027 + rm /tmp/62.RWE /tmp/spdk_tgt_config.json.DNY 00:04:08.027 + exit 0 00:04:08.027 06:44:15 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:08.027 06:44:15 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:08.027 INFO: changing configuration and checking if this can be detected... 
00:04:08.027 06:44:15 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.027 06:44:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.284 06:44:15 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.284 06:44:15 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:08.284 06:44:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.284 + '[' 2 -ne 2 ']' 00:04:08.284 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:08.284 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:08.284 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.284 +++ basename /dev/fd/62 00:04:08.284 ++ mktemp /tmp/62.XXX 00:04:08.284 + tmp_file_1=/tmp/62.oJT 00:04:08.284 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.284 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:08.284 + tmp_file_2=/tmp/spdk_tgt_config.json.r1F 00:04:08.284 + ret=0 00:04:08.284 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.852 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.852 + diff -u /tmp/62.oJT /tmp/spdk_tgt_config.json.r1F 00:04:08.852 + ret=1 00:04:08.852 + echo '=== Start of file: /tmp/62.oJT ===' 00:04:08.852 + cat /tmp/62.oJT 00:04:08.852 + echo '=== End of file: /tmp/62.oJT ===' 00:04:08.852 + echo '' 00:04:08.852 + echo '=== Start of file: /tmp/spdk_tgt_config.json.r1F ===' 00:04:08.852 + cat /tmp/spdk_tgt_config.json.r1F 00:04:08.852 + echo '=== End of file: /tmp/spdk_tgt_config.json.r1F ===' 00:04:08.852 + echo '' 00:04:08.852 + rm /tmp/62.oJT /tmp/spdk_tgt_config.json.r1F 00:04:08.852 + exit 1 00:04:08.852 06:44:15 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:08.852 INFO: configuration change detected. 
00:04:08.852 06:44:15 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:08.852 06:44:15 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:08.852 06:44:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:08.852 06:44:15 -- common/autotest_common.sh@10 -- # set +x 00:04:08.852 06:44:15 -- json_config/json_config.sh@360 -- # local ret=0 00:04:08.852 06:44:15 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:08.852 06:44:15 -- json_config/json_config.sh@370 -- # [[ -n 2907698 ]] 00:04:08.852 06:44:15 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:08.852 06:44:15 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:08.852 06:44:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:08.852 06:44:15 -- common/autotest_common.sh@10 -- # set +x 00:04:08.852 06:44:15 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:08.852 06:44:15 -- json_config/json_config.sh@246 -- # uname -s 00:04:08.852 06:44:15 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:08.852 06:44:15 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:08.852 06:44:15 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:08.852 06:44:15 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:08.852 06:44:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:08.852 06:44:15 -- common/autotest_common.sh@10 -- # set +x 00:04:08.852 06:44:15 -- json_config/json_config.sh@376 -- # killprocess 2907698 00:04:08.852 06:44:15 -- common/autotest_common.sh@926 -- # '[' -z 2907698 ']' 00:04:08.852 06:44:15 -- common/autotest_common.sh@930 -- # kill -0 2907698 00:04:08.852 06:44:15 -- common/autotest_common.sh@931 -- # uname 00:04:08.852 06:44:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:08.852 06:44:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2907698 00:04:08.852 
06:44:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:08.852 06:44:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:08.852 06:44:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2907698' 00:04:08.852 killing process with pid 2907698 00:04:08.852 06:44:15 -- common/autotest_common.sh@945 -- # kill 2907698 00:04:08.852 06:44:15 -- common/autotest_common.sh@950 -- # wait 2907698 00:04:10.756 06:44:17 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.756 06:44:17 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:10.756 06:44:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:10.756 06:44:17 -- common/autotest_common.sh@10 -- # set +x 00:04:10.756 06:44:17 -- json_config/json_config.sh@381 -- # return 0 00:04:10.756 06:44:17 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:10.756 INFO: Success 00:04:10.756 00:04:10.756 real 0m16.027s 00:04:10.756 user 0m18.520s 00:04:10.756 sys 0m1.876s 00:04:10.757 06:44:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.757 06:44:17 -- common/autotest_common.sh@10 -- # set +x 00:04:10.757 ************************************ 00:04:10.757 END TEST json_config 00:04:10.757 ************************************ 00:04:10.757 06:44:17 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:10.757 06:44:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.757 06:44:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.757 06:44:17 -- common/autotest_common.sh@10 -- # set +x 00:04:10.757 ************************************ 00:04:10.757 START TEST json_config_extra_key 00:04:10.757 ************************************ 00:04:10.757 
06:44:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:10.757 06:44:17 -- nvmf/common.sh@7 -- # uname -s 00:04:10.757 06:44:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.757 06:44:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.757 06:44:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.757 06:44:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.757 06:44:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.757 06:44:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:10.757 06:44:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:10.757 06:44:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.757 06:44:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.757 06:44:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.757 06:44:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:10.757 06:44:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:10.757 06:44:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.757 06:44:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.757 06:44:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:10.757 06:44:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:10.757 06:44:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.757 06:44:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.757 06:44:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.757 06:44:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.757 06:44:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.757 06:44:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.757 06:44:17 -- paths/export.sh@5 -- # export PATH 00:04:10.757 06:44:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.757 06:44:17 -- nvmf/common.sh@46 -- # : 0 00:04:10.757 06:44:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:10.757 06:44:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:10.757 
06:44:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:10.757 06:44:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.757 06:44:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.757 06:44:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:10.757 06:44:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:10.757 06:44:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:10.757 INFO: launching applications... 
00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2908643 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:10.757 Waiting for target to run... 00:04:10.757 06:44:17 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2908643 /var/tmp/spdk_tgt.sock 00:04:10.757 06:44:17 -- common/autotest_common.sh@819 -- # '[' -z 2908643 ']' 00:04:10.757 06:44:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.757 06:44:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:10.757 06:44:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.757 06:44:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:10.757 06:44:17 -- common/autotest_common.sh@10 -- # set +x 00:04:10.757 [2024-05-12 06:44:17.628107] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:10.757 [2024-05-12 06:44:17.628199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908643 ] 00:04:10.757 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.017 [2024-05-12 06:44:18.128499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.275 [2024-05-12 06:44:18.233444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:11.275 [2024-05-12 06:44:18.233634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.535 06:44:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:11.535 06:44:18 -- common/autotest_common.sh@852 -- # return 0 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:11.535 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:11.535 INFO: shutting down applications... 
00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2908643 ]] 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2908643 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2908643 00:04:11.535 06:44:18 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:12.104 06:44:19 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:12.104 06:44:19 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:12.104 06:44:19 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2908643 00:04:12.104 06:44:19 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:12.104 06:44:19 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:12.104 06:44:19 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:12.104 06:44:19 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:12.104 SPDK target shutdown done 00:04:12.104 06:44:19 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:12.104 Success 00:04:12.104 00:04:12.104 real 0m1.576s 00:04:12.104 user 0m1.468s 00:04:12.104 sys 0m0.580s 00:04:12.104 06:44:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.104 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:04:12.104 ************************************ 00:04:12.104 END TEST json_config_extra_key 00:04:12.104 ************************************ 00:04:12.104 06:44:19 -- spdk/autotest.sh@180 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.104 06:44:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.104 06:44:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.104 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:04:12.104 ************************************ 00:04:12.104 START TEST alias_rpc 00:04:12.104 ************************************ 00:04:12.104 06:44:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.104 * Looking for test storage... 00:04:12.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:12.104 06:44:19 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:12.104 06:44:19 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2908946 00:04:12.104 06:44:19 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.104 06:44:19 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2908946 00:04:12.104 06:44:19 -- common/autotest_common.sh@819 -- # '[' -z 2908946 ']' 00:04:12.104 06:44:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.104 06:44:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:12.104 06:44:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.104 06:44:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:12.104 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:04:12.104 [2024-05-12 06:44:19.228667] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:12.104 [2024-05-12 06:44:19.228785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908946 ] 00:04:12.362 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.362 [2024-05-12 06:44:19.292079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.362 [2024-05-12 06:44:19.406547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:12.362 [2024-05-12 06:44:19.406739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.300 06:44:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:13.300 06:44:20 -- common/autotest_common.sh@852 -- # return 0 00:04:13.300 06:44:20 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:13.559 06:44:20 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2908946 00:04:13.559 06:44:20 -- common/autotest_common.sh@926 -- # '[' -z 2908946 ']' 00:04:13.559 06:44:20 -- common/autotest_common.sh@930 -- # kill -0 2908946 00:04:13.559 06:44:20 -- common/autotest_common.sh@931 -- # uname 00:04:13.559 06:44:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:13.559 06:44:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2908946 00:04:13.559 06:44:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:13.559 06:44:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:13.559 06:44:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2908946' 00:04:13.559 killing process with pid 2908946 00:04:13.559 06:44:20 -- common/autotest_common.sh@945 -- # kill 2908946 00:04:13.559 06:44:20 -- common/autotest_common.sh@950 -- # wait 2908946 00:04:14.127 00:04:14.127 real 0m1.826s 00:04:14.127 user 0m2.138s 00:04:14.127 sys 0m0.454s 
00:04:14.127 06:44:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.127 06:44:20 -- common/autotest_common.sh@10 -- # set +x 00:04:14.127 ************************************ 00:04:14.127 END TEST alias_rpc 00:04:14.127 ************************************ 00:04:14.127 06:44:20 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:14.127 06:44:20 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:14.127 06:44:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.127 06:44:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.127 06:44:20 -- common/autotest_common.sh@10 -- # set +x 00:04:14.127 ************************************ 00:04:14.127 START TEST spdkcli_tcp 00:04:14.127 ************************************ 00:04:14.127 06:44:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:14.127 * Looking for test storage... 
00:04:14.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:14.127 06:44:21 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:14.127 06:44:21 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:14.127 06:44:21 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:14.127 06:44:21 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:14.127 06:44:21 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:14.127 06:44:21 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:14.127 06:44:21 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:14.127 06:44:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:14.127 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:04:14.127 06:44:21 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2909150 00:04:14.127 06:44:21 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:14.127 06:44:21 -- spdkcli/tcp.sh@27 -- # waitforlisten 2909150 00:04:14.127 06:44:21 -- common/autotest_common.sh@819 -- # '[' -z 2909150 ']' 00:04:14.127 06:44:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.127 06:44:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:14.127 06:44:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:14.127 06:44:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:14.127 06:44:21 -- common/autotest_common.sh@10 -- # set +x 00:04:14.127 [2024-05-12 06:44:21.082795] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:14.127 [2024-05-12 06:44:21.082883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909150 ] 00:04:14.127 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.127 [2024-05-12 06:44:21.142190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.127 [2024-05-12 06:44:21.246078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:14.127 [2024-05-12 06:44:21.246279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.127 [2024-05-12 06:44:21.246280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.061 06:44:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:15.061 06:44:21 -- common/autotest_common.sh@852 -- # return 0 00:04:15.061 06:44:21 -- spdkcli/tcp.sh@31 -- # socat_pid=2909291 00:04:15.061 06:44:21 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:15.061 06:44:21 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:15.318 [ 00:04:15.318 "bdev_malloc_delete", 00:04:15.318 "bdev_malloc_create", 00:04:15.318 "bdev_null_resize", 00:04:15.318 "bdev_null_delete", 00:04:15.318 "bdev_null_create", 00:04:15.318 "bdev_nvme_cuse_unregister", 00:04:15.318 "bdev_nvme_cuse_register", 00:04:15.318 "bdev_opal_new_user", 00:04:15.318 "bdev_opal_set_lock_state", 00:04:15.318 "bdev_opal_delete", 00:04:15.318 "bdev_opal_get_info", 00:04:15.318 "bdev_opal_create", 00:04:15.318 
"bdev_nvme_opal_revert", 00:04:15.318 "bdev_nvme_opal_init", 00:04:15.318 "bdev_nvme_send_cmd", 00:04:15.318 "bdev_nvme_get_path_iostat", 00:04:15.318 "bdev_nvme_get_mdns_discovery_info", 00:04:15.318 "bdev_nvme_stop_mdns_discovery", 00:04:15.318 "bdev_nvme_start_mdns_discovery", 00:04:15.318 "bdev_nvme_set_multipath_policy", 00:04:15.318 "bdev_nvme_set_preferred_path", 00:04:15.318 "bdev_nvme_get_io_paths", 00:04:15.318 "bdev_nvme_remove_error_injection", 00:04:15.318 "bdev_nvme_add_error_injection", 00:04:15.318 "bdev_nvme_get_discovery_info", 00:04:15.318 "bdev_nvme_stop_discovery", 00:04:15.318 "bdev_nvme_start_discovery", 00:04:15.318 "bdev_nvme_get_controller_health_info", 00:04:15.318 "bdev_nvme_disable_controller", 00:04:15.318 "bdev_nvme_enable_controller", 00:04:15.318 "bdev_nvme_reset_controller", 00:04:15.318 "bdev_nvme_get_transport_statistics", 00:04:15.318 "bdev_nvme_apply_firmware", 00:04:15.318 "bdev_nvme_detach_controller", 00:04:15.318 "bdev_nvme_get_controllers", 00:04:15.318 "bdev_nvme_attach_controller", 00:04:15.318 "bdev_nvme_set_hotplug", 00:04:15.318 "bdev_nvme_set_options", 00:04:15.318 "bdev_passthru_delete", 00:04:15.318 "bdev_passthru_create", 00:04:15.318 "bdev_lvol_grow_lvstore", 00:04:15.318 "bdev_lvol_get_lvols", 00:04:15.318 "bdev_lvol_get_lvstores", 00:04:15.318 "bdev_lvol_delete", 00:04:15.318 "bdev_lvol_set_read_only", 00:04:15.318 "bdev_lvol_resize", 00:04:15.318 "bdev_lvol_decouple_parent", 00:04:15.318 "bdev_lvol_inflate", 00:04:15.318 "bdev_lvol_rename", 00:04:15.318 "bdev_lvol_clone_bdev", 00:04:15.318 "bdev_lvol_clone", 00:04:15.318 "bdev_lvol_snapshot", 00:04:15.318 "bdev_lvol_create", 00:04:15.318 "bdev_lvol_delete_lvstore", 00:04:15.318 "bdev_lvol_rename_lvstore", 00:04:15.318 "bdev_lvol_create_lvstore", 00:04:15.318 "bdev_raid_set_options", 00:04:15.318 "bdev_raid_remove_base_bdev", 00:04:15.318 "bdev_raid_add_base_bdev", 00:04:15.318 "bdev_raid_delete", 00:04:15.318 "bdev_raid_create", 00:04:15.318 
"bdev_raid_get_bdevs", 00:04:15.318 "bdev_error_inject_error", 00:04:15.318 "bdev_error_delete", 00:04:15.318 "bdev_error_create", 00:04:15.318 "bdev_split_delete", 00:04:15.318 "bdev_split_create", 00:04:15.318 "bdev_delay_delete", 00:04:15.318 "bdev_delay_create", 00:04:15.318 "bdev_delay_update_latency", 00:04:15.318 "bdev_zone_block_delete", 00:04:15.318 "bdev_zone_block_create", 00:04:15.318 "blobfs_create", 00:04:15.318 "blobfs_detect", 00:04:15.318 "blobfs_set_cache_size", 00:04:15.318 "bdev_aio_delete", 00:04:15.318 "bdev_aio_rescan", 00:04:15.318 "bdev_aio_create", 00:04:15.318 "bdev_ftl_set_property", 00:04:15.318 "bdev_ftl_get_properties", 00:04:15.318 "bdev_ftl_get_stats", 00:04:15.318 "bdev_ftl_unmap", 00:04:15.318 "bdev_ftl_unload", 00:04:15.318 "bdev_ftl_delete", 00:04:15.318 "bdev_ftl_load", 00:04:15.318 "bdev_ftl_create", 00:04:15.318 "bdev_virtio_attach_controller", 00:04:15.318 "bdev_virtio_scsi_get_devices", 00:04:15.318 "bdev_virtio_detach_controller", 00:04:15.318 "bdev_virtio_blk_set_hotplug", 00:04:15.318 "bdev_iscsi_delete", 00:04:15.318 "bdev_iscsi_create", 00:04:15.318 "bdev_iscsi_set_options", 00:04:15.318 "accel_error_inject_error", 00:04:15.318 "ioat_scan_accel_module", 00:04:15.318 "dsa_scan_accel_module", 00:04:15.318 "iaa_scan_accel_module", 00:04:15.318 "iscsi_set_options", 00:04:15.318 "iscsi_get_auth_groups", 00:04:15.318 "iscsi_auth_group_remove_secret", 00:04:15.318 "iscsi_auth_group_add_secret", 00:04:15.318 "iscsi_delete_auth_group", 00:04:15.318 "iscsi_create_auth_group", 00:04:15.318 "iscsi_set_discovery_auth", 00:04:15.318 "iscsi_get_options", 00:04:15.318 "iscsi_target_node_request_logout", 00:04:15.318 "iscsi_target_node_set_redirect", 00:04:15.318 "iscsi_target_node_set_auth", 00:04:15.318 "iscsi_target_node_add_lun", 00:04:15.318 "iscsi_get_connections", 00:04:15.318 "iscsi_portal_group_set_auth", 00:04:15.318 "iscsi_start_portal_group", 00:04:15.318 "iscsi_delete_portal_group", 00:04:15.318 
"iscsi_create_portal_group", 00:04:15.318 "iscsi_get_portal_groups", 00:04:15.318 "iscsi_delete_target_node", 00:04:15.318 "iscsi_target_node_remove_pg_ig_maps", 00:04:15.318 "iscsi_target_node_add_pg_ig_maps", 00:04:15.318 "iscsi_create_target_node", 00:04:15.318 "iscsi_get_target_nodes", 00:04:15.318 "iscsi_delete_initiator_group", 00:04:15.318 "iscsi_initiator_group_remove_initiators", 00:04:15.318 "iscsi_initiator_group_add_initiators", 00:04:15.318 "iscsi_create_initiator_group", 00:04:15.318 "iscsi_get_initiator_groups", 00:04:15.318 "nvmf_set_crdt", 00:04:15.318 "nvmf_set_config", 00:04:15.318 "nvmf_set_max_subsystems", 00:04:15.318 "nvmf_subsystem_get_listeners", 00:04:15.318 "nvmf_subsystem_get_qpairs", 00:04:15.318 "nvmf_subsystem_get_controllers", 00:04:15.318 "nvmf_get_stats", 00:04:15.318 "nvmf_get_transports", 00:04:15.318 "nvmf_create_transport", 00:04:15.318 "nvmf_get_targets", 00:04:15.318 "nvmf_delete_target", 00:04:15.318 "nvmf_create_target", 00:04:15.318 "nvmf_subsystem_allow_any_host", 00:04:15.318 "nvmf_subsystem_remove_host", 00:04:15.318 "nvmf_subsystem_add_host", 00:04:15.318 "nvmf_subsystem_remove_ns", 00:04:15.318 "nvmf_subsystem_add_ns", 00:04:15.318 "nvmf_subsystem_listener_set_ana_state", 00:04:15.318 "nvmf_discovery_get_referrals", 00:04:15.318 "nvmf_discovery_remove_referral", 00:04:15.318 "nvmf_discovery_add_referral", 00:04:15.318 "nvmf_subsystem_remove_listener", 00:04:15.318 "nvmf_subsystem_add_listener", 00:04:15.318 "nvmf_delete_subsystem", 00:04:15.318 "nvmf_create_subsystem", 00:04:15.318 "nvmf_get_subsystems", 00:04:15.318 "env_dpdk_get_mem_stats", 00:04:15.318 "nbd_get_disks", 00:04:15.318 "nbd_stop_disk", 00:04:15.318 "nbd_start_disk", 00:04:15.318 "ublk_recover_disk", 00:04:15.318 "ublk_get_disks", 00:04:15.318 "ublk_stop_disk", 00:04:15.318 "ublk_start_disk", 00:04:15.318 "ublk_destroy_target", 00:04:15.318 "ublk_create_target", 00:04:15.318 "virtio_blk_create_transport", 00:04:15.318 "virtio_blk_get_transports", 
00:04:15.318 "vhost_controller_set_coalescing", 00:04:15.318 "vhost_get_controllers", 00:04:15.318 "vhost_delete_controller", 00:04:15.318 "vhost_create_blk_controller", 00:04:15.318 "vhost_scsi_controller_remove_target", 00:04:15.318 "vhost_scsi_controller_add_target", 00:04:15.318 "vhost_start_scsi_controller", 00:04:15.318 "vhost_create_scsi_controller", 00:04:15.318 "thread_set_cpumask", 00:04:15.318 "framework_get_scheduler", 00:04:15.318 "framework_set_scheduler", 00:04:15.318 "framework_get_reactors", 00:04:15.318 "thread_get_io_channels", 00:04:15.318 "thread_get_pollers", 00:04:15.318 "thread_get_stats", 00:04:15.318 "framework_monitor_context_switch", 00:04:15.318 "spdk_kill_instance", 00:04:15.318 "log_enable_timestamps", 00:04:15.318 "log_get_flags", 00:04:15.318 "log_clear_flag", 00:04:15.318 "log_set_flag", 00:04:15.318 "log_get_level", 00:04:15.318 "log_set_level", 00:04:15.318 "log_get_print_level", 00:04:15.318 "log_set_print_level", 00:04:15.318 "framework_enable_cpumask_locks", 00:04:15.318 "framework_disable_cpumask_locks", 00:04:15.318 "framework_wait_init", 00:04:15.318 "framework_start_init", 00:04:15.318 "scsi_get_devices", 00:04:15.318 "bdev_get_histogram", 00:04:15.318 "bdev_enable_histogram", 00:04:15.319 "bdev_set_qos_limit", 00:04:15.319 "bdev_set_qd_sampling_period", 00:04:15.319 "bdev_get_bdevs", 00:04:15.319 "bdev_reset_iostat", 00:04:15.319 "bdev_get_iostat", 00:04:15.319 "bdev_examine", 00:04:15.319 "bdev_wait_for_examine", 00:04:15.319 "bdev_set_options", 00:04:15.319 "notify_get_notifications", 00:04:15.319 "notify_get_types", 00:04:15.319 "accel_get_stats", 00:04:15.319 "accel_set_options", 00:04:15.319 "accel_set_driver", 00:04:15.319 "accel_crypto_key_destroy", 00:04:15.319 "accel_crypto_keys_get", 00:04:15.319 "accel_crypto_key_create", 00:04:15.319 "accel_assign_opc", 00:04:15.319 "accel_get_module_info", 00:04:15.319 "accel_get_opc_assignments", 00:04:15.319 "vmd_rescan", 00:04:15.319 "vmd_remove_device", 00:04:15.319 
"vmd_enable", 00:04:15.319 "sock_set_default_impl", 00:04:15.319 "sock_impl_set_options", 00:04:15.319 "sock_impl_get_options", 00:04:15.319 "iobuf_get_stats", 00:04:15.319 "iobuf_set_options", 00:04:15.319 "framework_get_pci_devices", 00:04:15.319 "framework_get_config", 00:04:15.319 "framework_get_subsystems", 00:04:15.319 "trace_get_info", 00:04:15.319 "trace_get_tpoint_group_mask", 00:04:15.319 "trace_disable_tpoint_group", 00:04:15.319 "trace_enable_tpoint_group", 00:04:15.319 "trace_clear_tpoint_mask", 00:04:15.319 "trace_set_tpoint_mask", 00:04:15.319 "spdk_get_version", 00:04:15.319 "rpc_get_methods" 00:04:15.319 ] 00:04:15.319 06:44:22 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:15.319 06:44:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:15.319 06:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:15.319 06:44:22 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:15.319 06:44:22 -- spdkcli/tcp.sh@38 -- # killprocess 2909150 00:04:15.319 06:44:22 -- common/autotest_common.sh@926 -- # '[' -z 2909150 ']' 00:04:15.319 06:44:22 -- common/autotest_common.sh@930 -- # kill -0 2909150 00:04:15.319 06:44:22 -- common/autotest_common.sh@931 -- # uname 00:04:15.319 06:44:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:15.319 06:44:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2909150 00:04:15.319 06:44:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:15.319 06:44:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:15.319 06:44:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2909150' 00:04:15.319 killing process with pid 2909150 00:04:15.319 06:44:22 -- common/autotest_common.sh@945 -- # kill 2909150 00:04:15.319 06:44:22 -- common/autotest_common.sh@950 -- # wait 2909150 00:04:15.888 00:04:15.888 real 0m1.744s 00:04:15.888 user 0m3.338s 00:04:15.888 sys 0m0.455s 00:04:15.888 06:44:22 -- common/autotest_common.sh@1105 -- 
# xtrace_disable 00:04:15.888 06:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 ************************************ 00:04:15.888 END TEST spdkcli_tcp 00:04:15.888 ************************************ 00:04:15.888 06:44:22 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.888 06:44:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:15.888 06:44:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.888 06:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 ************************************ 00:04:15.888 START TEST dpdk_mem_utility 00:04:15.888 ************************************ 00:04:15.888 06:44:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.888 * Looking for test storage... 00:04:15.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:15.888 06:44:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:15.888 06:44:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2909487 00:04:15.888 06:44:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.888 06:44:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2909487 00:04:15.888 06:44:22 -- common/autotest_common.sh@819 -- # '[' -z 2909487 ']' 00:04:15.888 06:44:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.889 06:44:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:15.889 06:44:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:15.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.889 06:44:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:15.889 06:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:15.889 [2024-05-12 06:44:22.848438] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:15.889 [2024-05-12 06:44:22.848526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909487 ] 00:04:15.889 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.889 [2024-05-12 06:44:22.908641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.889 [2024-05-12 06:44:23.012674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:15.889 [2024-05-12 06:44:23.012878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.826 06:44:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:16.826 06:44:23 -- common/autotest_common.sh@852 -- # return 0 00:04:16.826 06:44:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:16.826 06:44:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:16.826 06:44:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:16.826 06:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:16.826 { 00:04:16.826 "filename": "/tmp/spdk_mem_dump.txt" 00:04:16.826 } 00:04:16.826 06:44:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:16.826 06:44:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:16.826 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:16.826 1 heaps totaling size 814.000000 MiB 00:04:16.826 size: 
814.000000 MiB heap id: 0 00:04:16.826 end heaps---------- 00:04:16.826 8 mempools totaling size 598.116089 MiB 00:04:16.826 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:16.826 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:16.826 size: 84.521057 MiB name: bdev_io_2909487 00:04:16.826 size: 51.011292 MiB name: evtpool_2909487 00:04:16.826 size: 50.003479 MiB name: msgpool_2909487 00:04:16.826 size: 21.763794 MiB name: PDU_Pool 00:04:16.826 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:16.826 size: 0.026123 MiB name: Session_Pool 00:04:16.826 end mempools------- 00:04:16.826 6 memzones totaling size 4.142822 MiB 00:04:16.826 size: 1.000366 MiB name: RG_ring_0_2909487 00:04:16.826 size: 1.000366 MiB name: RG_ring_1_2909487 00:04:16.826 size: 1.000366 MiB name: RG_ring_4_2909487 00:04:16.826 size: 1.000366 MiB name: RG_ring_5_2909487 00:04:16.826 size: 0.125366 MiB name: RG_ring_2_2909487 00:04:16.826 size: 0.015991 MiB name: RG_ring_3_2909487 00:04:16.826 end memzones------- 00:04:16.826 06:44:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:16.826 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:16.826 list of free elements. 
size: 12.519348 MiB 00:04:16.826 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:16.826 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:16.826 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:16.826 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:16.826 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:16.826 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:16.826 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:16.826 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:16.826 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:16.826 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:16.826 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:16.826 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:16.826 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:16.826 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:16.826 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:16.826 list of standard malloc elements. 
size: 199.218079 MiB 00:04:16.826 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:16.826 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:16.826 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:16.826 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:16.826 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:16.826 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:16.826 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:16.826 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:16.826 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:16.826 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:16.826 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:16.827 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:16.827 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:16.827 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:04:16.827 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:16.827 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:16.827 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:16.827 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:16.827 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:16.827 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:16.827 list of memzone associated elements. 
size: 602.262573 MiB 00:04:16.827 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:16.827 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:16.827 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:16.827 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:16.827 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:16.827 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2909487_0 00:04:16.827 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:16.827 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2909487_0 00:04:16.827 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:16.827 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2909487_0 00:04:16.827 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:16.827 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:16.827 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:16.827 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:16.827 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:16.827 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2909487 00:04:16.827 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:16.827 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2909487 00:04:16.827 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:16.827 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2909487 00:04:16.827 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:16.827 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:16.827 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:16.827 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:16.827 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:16.827 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:16.827 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:16.827 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:16.827 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:16.827 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2909487 00:04:16.827 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:16.827 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2909487 00:04:16.827 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:16.827 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2909487 00:04:16.827 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:16.827 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2909487 00:04:16.827 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:16.827 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2909487 00:04:16.827 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:16.827 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:16.827 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:16.827 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:16.827 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:16.827 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:16.827 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:16.827 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2909487 00:04:16.827 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:16.827 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:16.827 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:16.827 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:16.827 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:04:16.827 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2909487 00:04:16.827 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:16.827 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:16.827 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:16.827 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2909487 00:04:16.827 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:16.827 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2909487 00:04:16.827 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:16.827 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:16.827 06:44:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:16.827 06:44:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2909487 00:04:16.827 06:44:23 -- common/autotest_common.sh@926 -- # '[' -z 2909487 ']' 00:04:16.827 06:44:23 -- common/autotest_common.sh@930 -- # kill -0 2909487 00:04:16.827 06:44:23 -- common/autotest_common.sh@931 -- # uname 00:04:16.827 06:44:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:16.827 06:44:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2909487 00:04:16.827 06:44:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:16.827 06:44:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:16.827 06:44:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2909487' 00:04:16.827 killing process with pid 2909487 00:04:16.827 06:44:23 -- common/autotest_common.sh@945 -- # kill 2909487 00:04:16.827 06:44:23 -- common/autotest_common.sh@950 -- # wait 2909487 00:04:17.395 00:04:17.395 real 0m1.636s 00:04:17.395 user 0m1.818s 00:04:17.395 sys 0m0.423s 00:04:17.395 06:44:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.395 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:04:17.395 
************************************ 00:04:17.395 END TEST dpdk_mem_utility 00:04:17.395 ************************************ 00:04:17.395 06:44:24 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:17.395 06:44:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:17.395 06:44:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.395 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:04:17.395 ************************************ 00:04:17.395 START TEST event 00:04:17.395 ************************************ 00:04:17.395 06:44:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:17.395 * Looking for test storage... 00:04:17.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:17.395 06:44:24 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:17.395 06:44:24 -- bdev/nbd_common.sh@6 -- # set -e 00:04:17.395 06:44:24 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:17.395 06:44:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:17.395 06:44:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.395 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:04:17.395 ************************************ 00:04:17.395 START TEST event_perf 00:04:17.395 ************************************ 00:04:17.395 06:44:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:17.395 Running I/O for 1 seconds...[2024-05-12 06:44:24.490249] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:17.395 [2024-05-12 06:44:24.490330] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909679 ] 00:04:17.395 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.653 [2024-05-12 06:44:24.555793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:17.653 [2024-05-12 06:44:24.672477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.653 [2024-05-12 06:44:24.672544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:17.653 [2024-05-12 06:44:24.672635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:17.653 [2024-05-12 06:44:24.672638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.031 Running I/O for 1 seconds... 00:04:19.031 lcore 0: 233747 00:04:19.031 lcore 1: 233747 00:04:19.031 lcore 2: 233747 00:04:19.031 lcore 3: 233747 00:04:19.031 done. 
00:04:19.031 00:04:19.031 real 0m1.322s 00:04:19.031 user 0m4.227s 00:04:19.031 sys 0m0.091s 00:04:19.031 06:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.031 06:44:25 -- common/autotest_common.sh@10 -- # set +x 00:04:19.031 ************************************ 00:04:19.031 END TEST event_perf 00:04:19.031 ************************************ 00:04:19.031 06:44:25 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:19.031 06:44:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:19.031 06:44:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.031 06:44:25 -- common/autotest_common.sh@10 -- # set +x 00:04:19.031 ************************************ 00:04:19.031 START TEST event_reactor 00:04:19.031 ************************************ 00:04:19.031 06:44:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:19.031 [2024-05-12 06:44:25.837782] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:19.031 [2024-05-12 06:44:25.837859] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909842 ] 00:04:19.031 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.031 [2024-05-12 06:44:25.900661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.031 [2024-05-12 06:44:26.018709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.412 test_start 00:04:20.412 oneshot 00:04:20.412 tick 100 00:04:20.412 tick 100 00:04:20.412 tick 250 00:04:20.412 tick 100 00:04:20.412 tick 100 00:04:20.412 tick 100 00:04:20.412 tick 250 00:04:20.412 tick 500 00:04:20.412 tick 100 00:04:20.412 tick 100 00:04:20.412 tick 250 00:04:20.412 tick 100 00:04:20.412 tick 100 00:04:20.412 test_end 00:04:20.412 00:04:20.412 real 0m1.314s 00:04:20.412 user 0m1.226s 00:04:20.412 sys 0m0.083s 00:04:20.412 06:44:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.412 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:04:20.412 ************************************ 00:04:20.412 END TEST event_reactor 00:04:20.412 ************************************ 00:04:20.412 06:44:27 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:20.412 06:44:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:20.412 06:44:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.412 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:04:20.412 ************************************ 00:04:20.412 START TEST event_reactor_perf 00:04:20.412 ************************************ 00:04:20.412 06:44:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:20.412 [2024-05-12 06:44:27.175385] 
Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:20.412 [2024-05-12 06:44:27.175465] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910117 ] 00:04:20.412 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.412 [2024-05-12 06:44:27.234385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.412 [2024-05-12 06:44:27.353273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.347 test_start 00:04:21.347 test_end 00:04:21.347 Performance: 350777 events per second 00:04:21.347 00:04:21.347 real 0m1.307s 00:04:21.347 user 0m1.226s 00:04:21.347 sys 0m0.076s 00:04:21.347 06:44:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.347 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.347 ************************************ 00:04:21.347 END TEST event_reactor_perf 00:04:21.347 ************************************ 00:04:21.605 06:44:28 -- event/event.sh@49 -- # uname -s 00:04:21.605 06:44:28 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:21.605 06:44:28 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:21.605 06:44:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.605 06:44:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.605 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.605 ************************************ 00:04:21.605 START TEST event_scheduler 00:04:21.605 ************************************ 00:04:21.605 06:44:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:21.605 * Looking for test storage... 
00:04:21.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:21.605 06:44:28 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:21.605 06:44:28 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2910306 00:04:21.605 06:44:28 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:21.605 06:44:28 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.605 06:44:28 -- scheduler/scheduler.sh@37 -- # waitforlisten 2910306 00:04:21.605 06:44:28 -- common/autotest_common.sh@819 -- # '[' -z 2910306 ']' 00:04:21.605 06:44:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.605 06:44:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:21.605 06:44:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.605 06:44:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:21.605 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.605 [2024-05-12 06:44:28.585755] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:21.605 [2024-05-12 06:44:28.585842] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910306 ] 00:04:21.605 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.605 [2024-05-12 06:44:28.643025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:21.865 [2024-05-12 06:44:28.750181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.865 [2024-05-12 06:44:28.750259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:21.865 [2024-05-12 06:44:28.750263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:21.865 [2024-05-12 06:44:28.750203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.865 06:44:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:21.865 06:44:28 -- common/autotest_common.sh@852 -- # return 0 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 POWER: Env isn't set yet! 00:04:21.865 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:21.865 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:21.865 POWER: Cannot get available frequencies of lcore 0 00:04:21.865 POWER: Attempting to initialise PSTAT power management... 
00:04:21.865 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:21.865 POWER: Initialized successfully for lcore 0 power management 00:04:21.865 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:21.865 POWER: Initialized successfully for lcore 1 power management 00:04:21.865 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:21.865 POWER: Initialized successfully for lcore 2 power management 00:04:21.865 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:21.865 POWER: Initialized successfully for lcore 3 power management 00:04:21.865 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 [2024-05-12 06:44:28.913946] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:21.865 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:21.865 06:44:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.865 06:44:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 ************************************ 00:04:21.865 START TEST scheduler_create_thread 00:04:21.865 ************************************ 00:04:21.865 06:44:28 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 2 00:04:21.865 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 3 00:04:21.865 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 4 00:04:21.865 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 
06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 5 00:04:21.865 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 6 00:04:21.865 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 7 00:04:21.865 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:21.865 8 00:04:21.865 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:21.865 06:44:28 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:21.865 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:21.865 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:22.125 9 00:04:22.125 06:44:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:22.125 06:44:28 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:22.125 06:44:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:22.125 06:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:22.125 10 00:04:22.125 06:44:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:04:22.125 06:44:29 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:22.125 06:44:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:22.125 06:44:29 -- common/autotest_common.sh@10 -- # set +x 00:04:22.125 06:44:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:22.125 06:44:29 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:22.125 06:44:29 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:22.125 06:44:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:22.125 06:44:29 -- common/autotest_common.sh@10 -- # set +x 00:04:22.125 06:44:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:22.125 06:44:29 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:22.125 06:44:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:22.125 06:44:29 -- common/autotest_common.sh@10 -- # set +x 00:04:22.383 06:44:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:22.383 06:44:29 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:22.383 06:44:29 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:22.383 06:44:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:22.383 06:44:29 -- common/autotest_common.sh@10 -- # set +x 00:04:23.757 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:23.757 00:04:23.757 real 0m1.654s 00:04:23.757 user 0m0.014s 00:04:23.757 sys 0m0.001s 00:04:23.757 06:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.757 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:04:23.757 ************************************ 00:04:23.757 END TEST scheduler_create_thread 00:04:23.757 ************************************ 00:04:23.757 06:44:30 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:23.757 06:44:30 -- 
scheduler/scheduler.sh@46 -- # killprocess 2910306 00:04:23.757 06:44:30 -- common/autotest_common.sh@926 -- # '[' -z 2910306 ']' 00:04:23.757 06:44:30 -- common/autotest_common.sh@930 -- # kill -0 2910306 00:04:23.757 06:44:30 -- common/autotest_common.sh@931 -- # uname 00:04:23.757 06:44:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:23.757 06:44:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2910306 00:04:23.757 06:44:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:23.757 06:44:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:23.757 06:44:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2910306' 00:04:23.757 killing process with pid 2910306 00:04:23.757 06:44:30 -- common/autotest_common.sh@945 -- # kill 2910306 00:04:23.757 06:44:30 -- common/autotest_common.sh@950 -- # wait 2910306 00:04:24.015 [2024-05-12 06:44:31.053875] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:24.309 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:24.309 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:24.309 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:24.309 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:24.309 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:24.309 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:24.309 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:24.309 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:24.309 00:04:24.309 real 0m2.815s 00:04:24.309 user 0m3.617s 00:04:24.309 sys 0m0.299s 00:04:24.309 06:44:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.309 06:44:31 -- common/autotest_common.sh@10 -- # set +x 00:04:24.309 ************************************ 00:04:24.309 END TEST event_scheduler 00:04:24.309 ************************************ 00:04:24.309 06:44:31 -- event/event.sh@51 -- # modprobe -n nbd 00:04:24.309 06:44:31 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:24.309 06:44:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.309 06:44:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.309 06:44:31 -- common/autotest_common.sh@10 -- # set +x 00:04:24.309 ************************************ 00:04:24.309 START TEST app_repeat 00:04:24.309 ************************************ 00:04:24.309 06:44:31 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:24.309 06:44:31 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.309 06:44:31 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.309 
06:44:31 -- event/event.sh@13 -- # local nbd_list 00:04:24.309 06:44:31 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.309 06:44:31 -- event/event.sh@14 -- # local bdev_list 00:04:24.309 06:44:31 -- event/event.sh@15 -- # local repeat_times=4 00:04:24.309 06:44:31 -- event/event.sh@17 -- # modprobe nbd 00:04:24.309 06:44:31 -- event/event.sh@19 -- # repeat_pid=2910633 00:04:24.309 06:44:31 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:24.309 06:44:31 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.309 06:44:31 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2910633' 00:04:24.309 Process app_repeat pid: 2910633 00:04:24.309 06:44:31 -- event/event.sh@23 -- # for i in {0..2} 00:04:24.309 06:44:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:24.309 spdk_app_start Round 0 00:04:24.309 06:44:31 -- event/event.sh@25 -- # waitforlisten 2910633 /var/tmp/spdk-nbd.sock 00:04:24.309 06:44:31 -- common/autotest_common.sh@819 -- # '[' -z 2910633 ']' 00:04:24.309 06:44:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.309 06:44:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:24.309 06:44:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:24.309 06:44:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:24.309 06:44:31 -- common/autotest_common.sh@10 -- # set +x 00:04:24.309 [2024-05-12 06:44:31.374925] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:24.309 [2024-05-12 06:44:31.375038] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910633 ] 00:04:24.309 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.589 [2024-05-12 06:44:31.439805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.589 [2024-05-12 06:44:31.556810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.589 [2024-05-12 06:44:31.556816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.522 06:44:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:25.522 06:44:32 -- common/autotest_common.sh@852 -- # return 0 00:04:25.522 06:44:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.522 Malloc0 00:04:25.522 06:44:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.780 Malloc1 00:04:25.780 06:44:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@12 -- # local i 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.780 06:44:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:26.038 /dev/nbd0 00:04:26.038 06:44:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:26.038 06:44:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:26.038 06:44:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:26.038 06:44:33 -- common/autotest_common.sh@857 -- # local i 00:04:26.038 06:44:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:26.038 06:44:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:26.038 06:44:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:26.038 06:44:33 -- common/autotest_common.sh@861 -- # break 00:04:26.038 06:44:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:26.038 06:44:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:26.038 06:44:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.038 1+0 records in 00:04:26.038 1+0 records out 00:04:26.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174478 s, 23.5 MB/s 00:04:26.038 06:44:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.038 06:44:33 -- common/autotest_common.sh@874 -- # size=4096 00:04:26.038 06:44:33 -- common/autotest_common.sh@875 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.038 06:44:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:26.038 06:44:33 -- common/autotest_common.sh@877 -- # return 0 00:04:26.038 06:44:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.038 06:44:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.038 06:44:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.295 /dev/nbd1 00:04:26.295 06:44:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.295 06:44:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.295 06:44:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:26.295 06:44:33 -- common/autotest_common.sh@857 -- # local i 00:04:26.295 06:44:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:26.295 06:44:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:26.295 06:44:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:26.295 06:44:33 -- common/autotest_common.sh@861 -- # break 00:04:26.295 06:44:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:26.295 06:44:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:26.295 06:44:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.295 1+0 records in 00:04:26.295 1+0 records out 00:04:26.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190935 s, 21.5 MB/s 00:04:26.295 06:44:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.295 06:44:33 -- common/autotest_common.sh@874 -- # size=4096 00:04:26.295 06:44:33 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.295 06:44:33 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:04:26.295 06:44:33 -- common/autotest_common.sh@877 -- # return 0 00:04:26.295 06:44:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.295 06:44:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.295 06:44:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.295 06:44:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.295 06:44:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:26.553 { 00:04:26.553 "nbd_device": "/dev/nbd0", 00:04:26.553 "bdev_name": "Malloc0" 00:04:26.553 }, 00:04:26.553 { 00:04:26.553 "nbd_device": "/dev/nbd1", 00:04:26.553 "bdev_name": "Malloc1" 00:04:26.553 } 00:04:26.553 ]' 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:26.553 { 00:04:26.553 "nbd_device": "/dev/nbd0", 00:04:26.553 "bdev_name": "Malloc0" 00:04:26.553 }, 00:04:26.553 { 00:04:26.553 "nbd_device": "/dev/nbd1", 00:04:26.553 "bdev_name": "Malloc1" 00:04:26.553 } 00:04:26.553 ]' 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.553 /dev/nbd1' 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.553 /dev/nbd1' 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.553 
06:44:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.553 06:44:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:26.810 256+0 records in 00:04:26.810 256+0 records out 00:04:26.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500748 s, 209 MB/s 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:26.810 256+0 records in 00:04:26.810 256+0 records out 00:04:26.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204946 s, 51.2 MB/s 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.810 256+0 records in 00:04:26.810 256+0 records out 00:04:26.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025344 s, 41.4 MB/s 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.810 06:44:33 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@51 -- # local i 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.810 06:44:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:27.068 06:44:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:27.068 06:44:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:27.068 06:44:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:27.068 06:44:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.068 06:44:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.068 06:44:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:27.068 06:44:34 -- bdev/nbd_common.sh@41 -- # break 00:04:27.068 06:44:34 -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.068 06:44:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.068 06:44:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@41 -- # break 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.326 06:44:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@65 -- # true 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.583 06:44:34 -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.583 06:44:34 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.841 06:44:34 -- event/event.sh@35 -- # sleep 3 00:04:28.099 [2024-05-12 06:44:35.079965] app.c: 798:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:04:28.099 [2024-05-12 06:44:35.190929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.099 [2024-05-12 06:44:35.190929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.357 [2024-05-12 06:44:35.251245] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.357 [2024-05-12 06:44:35.251316] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.881 06:44:37 -- event/event.sh@23 -- # for i in {0..2} 00:04:30.881 06:44:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:30.881 spdk_app_start Round 1 00:04:30.881 06:44:37 -- event/event.sh@25 -- # waitforlisten 2910633 /var/tmp/spdk-nbd.sock 00:04:30.881 06:44:37 -- common/autotest_common.sh@819 -- # '[' -z 2910633 ']' 00:04:30.881 06:44:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.881 06:44:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:30.881 06:44:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:30.881 06:44:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:30.881 06:44:37 -- common/autotest_common.sh@10 -- # set +x 00:04:31.139 06:44:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:31.139 06:44:38 -- common/autotest_common.sh@852 -- # return 0 00:04:31.139 06:44:38 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.396 Malloc0 00:04:31.396 06:44:38 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.654 Malloc1 00:04:31.654 06:44:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@12 -- # local i 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.654 06:44:38 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.912 /dev/nbd0 00:04:31.912 06:44:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.912 06:44:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.912 06:44:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:31.912 06:44:38 -- common/autotest_common.sh@857 -- # local i 00:04:31.912 06:44:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:31.912 06:44:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:31.912 06:44:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:31.912 06:44:38 -- common/autotest_common.sh@861 -- # break 00:04:31.912 06:44:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:31.912 06:44:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:31.912 06:44:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.912 1+0 records in 00:04:31.912 1+0 records out 00:04:31.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015907 s, 25.7 MB/s 00:04:31.912 06:44:38 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.912 06:44:38 -- common/autotest_common.sh@874 -- # size=4096 00:04:31.912 06:44:38 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.912 06:44:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:31.912 06:44:38 -- common/autotest_common.sh@877 -- # return 0 00:04:31.912 06:44:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.912 06:44:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.912 06:44:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:04:32.169 /dev/nbd1 00:04:32.169 06:44:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:32.169 06:44:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:32.169 06:44:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:32.169 06:44:39 -- common/autotest_common.sh@857 -- # local i 00:04:32.169 06:44:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:32.169 06:44:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:32.169 06:44:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:32.169 06:44:39 -- common/autotest_common.sh@861 -- # break 00:04:32.169 06:44:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:32.169 06:44:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:32.169 06:44:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.170 1+0 records in 00:04:32.170 1+0 records out 00:04:32.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207897 s, 19.7 MB/s 00:04:32.170 06:44:39 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.170 06:44:39 -- common/autotest_common.sh@874 -- # size=4096 00:04:32.170 06:44:39 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.170 06:44:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:32.170 06:44:39 -- common/autotest_common.sh@877 -- # return 0 00:04:32.170 06:44:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.170 06:44:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.170 06:44:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.170 06:44:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.170 06:44:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:32.427 { 00:04:32.427 "nbd_device": "/dev/nbd0", 00:04:32.427 "bdev_name": "Malloc0" 00:04:32.427 }, 00:04:32.427 { 00:04:32.427 "nbd_device": "/dev/nbd1", 00:04:32.427 "bdev_name": "Malloc1" 00:04:32.427 } 00:04:32.427 ]' 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:32.427 { 00:04:32.427 "nbd_device": "/dev/nbd0", 00:04:32.427 "bdev_name": "Malloc0" 00:04:32.427 }, 00:04:32.427 { 00:04:32.427 "nbd_device": "/dev/nbd1", 00:04:32.427 "bdev_name": "Malloc1" 00:04:32.427 } 00:04:32.427 ]' 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:32.427 /dev/nbd1' 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:32.427 /dev/nbd1' 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@65 -- # count=2 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@95 -- # count=2 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:32.427 06:44:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:32.428 256+0 records in 00:04:32.428 256+0 records out 00:04:32.428 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506386 s, 207 MB/s 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:32.428 256+0 records in 00:04:32.428 256+0 records out 00:04:32.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212127 s, 49.4 MB/s 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:32.428 256+0 records in 00:04:32.428 256+0 records out 00:04:32.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225545 s, 46.5 MB/s 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@51 -- # local i 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.428 06:44:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@41 -- # break 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.686 06:44:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.944 06:44:39 -- 
bdev/nbd_common.sh@41 -- # break 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.944 06:44:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@65 -- # true 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@65 -- # count=0 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@104 -- # count=0 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:33.202 06:44:40 -- bdev/nbd_common.sh@109 -- # return 0 00:04:33.202 06:44:40 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:33.460 06:44:40 -- event/event.sh@35 -- # sleep 3 00:04:33.718 [2024-05-12 06:44:40.804285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.976 [2024-05-12 06:44:40.919374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.976 [2024-05-12 06:44:40.919379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.976 [2024-05-12 06:44:40.976958] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:04:33.976 [2024-05-12 06:44:40.977056] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.500 06:44:43 -- event/event.sh@23 -- # for i in {0..2} 00:04:36.500 06:44:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:36.500 spdk_app_start Round 2 00:04:36.500 06:44:43 -- event/event.sh@25 -- # waitforlisten 2910633 /var/tmp/spdk-nbd.sock 00:04:36.500 06:44:43 -- common/autotest_common.sh@819 -- # '[' -z 2910633 ']' 00:04:36.500 06:44:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.500 06:44:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:36.500 06:44:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:36.500 06:44:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:36.500 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:04:36.758 06:44:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:36.758 06:44:43 -- common/autotest_common.sh@852 -- # return 0 00:04:36.758 06:44:43 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.016 Malloc0 00:04:37.016 06:44:44 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.274 Malloc1 00:04:37.274 06:44:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.274 06:44:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.275 06:44:44 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@12 -- # local i 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.275 06:44:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:37.532 /dev/nbd0 00:04:37.532 06:44:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:37.532 06:44:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:37.532 06:44:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:37.533 06:44:44 -- common/autotest_common.sh@857 -- # local i 00:04:37.533 06:44:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:37.533 06:44:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:37.533 06:44:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:37.533 06:44:44 -- common/autotest_common.sh@861 -- # break 00:04:37.533 06:44:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:37.533 06:44:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:37.533 06:44:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.533 1+0 records in 00:04:37.533 
1+0 records out 00:04:37.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000129209 s, 31.7 MB/s 00:04:37.533 06:44:44 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.533 06:44:44 -- common/autotest_common.sh@874 -- # size=4096 00:04:37.533 06:44:44 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.533 06:44:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:37.533 06:44:44 -- common/autotest_common.sh@877 -- # return 0 00:04:37.533 06:44:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.533 06:44:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.533 06:44:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:37.791 /dev/nbd1 00:04:37.791 06:44:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:37.791 06:44:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:37.791 06:44:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:37.791 06:44:44 -- common/autotest_common.sh@857 -- # local i 00:04:37.791 06:44:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:37.791 06:44:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:37.791 06:44:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:37.791 06:44:44 -- common/autotest_common.sh@861 -- # break 00:04:37.791 06:44:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:37.791 06:44:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:37.791 06:44:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.791 1+0 records in 00:04:37.791 1+0 records out 00:04:37.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174216 s, 23.5 MB/s 00:04:37.791 06:44:44 -- 
common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.791 06:44:44 -- common/autotest_common.sh@874 -- # size=4096 00:04:37.791 06:44:44 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.791 06:44:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:37.791 06:44:44 -- common/autotest_common.sh@877 -- # return 0 00:04:37.791 06:44:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.791 06:44:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.791 06:44:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.791 06:44:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.791 06:44:44 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:38.049 { 00:04:38.049 "nbd_device": "/dev/nbd0", 00:04:38.049 "bdev_name": "Malloc0" 00:04:38.049 }, 00:04:38.049 { 00:04:38.049 "nbd_device": "/dev/nbd1", 00:04:38.049 "bdev_name": "Malloc1" 00:04:38.049 } 00:04:38.049 ]' 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:38.049 { 00:04:38.049 "nbd_device": "/dev/nbd0", 00:04:38.049 "bdev_name": "Malloc0" 00:04:38.049 }, 00:04:38.049 { 00:04:38.049 "nbd_device": "/dev/nbd1", 00:04:38.049 "bdev_name": "Malloc1" 00:04:38.049 } 00:04:38.049 ]' 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:38.049 /dev/nbd1' 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:38.049 /dev/nbd1' 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@65 -- # count=2 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:38.049 
06:44:45 -- bdev/nbd_common.sh@95 -- # count=2 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:38.049 256+0 records in 00:04:38.049 256+0 records out 00:04:38.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507739 s, 207 MB/s 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:38.049 256+0 records in 00:04:38.049 256+0 records out 00:04:38.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241278 s, 43.5 MB/s 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.049 06:44:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.307 256+0 records in 00:04:38.307 256+0 records out 00:04:38.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024821 s, 42.2 MB/s 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:04:38.307 06:44:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@51 -- # local i 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.307 06:44:45 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@41 -- # break 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.307 06:44:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@41 -- # break 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.565 06:44:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.824 06:44:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.824 06:44:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.824 06:44:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.081 06:44:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:39.081 06:44:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:39.081 06:44:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.081 06:44:45 -- bdev/nbd_common.sh@65 -- # true 00:04:39.081 06:44:45 -- bdev/nbd_common.sh@65 -- # count=0 00:04:39.081 06:44:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:39.081 06:44:45 -- bdev/nbd_common.sh@104 -- # count=0 00:04:39.081 06:44:45 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:39.081 06:44:45 -- bdev/nbd_common.sh@109 -- # return 0 00:04:39.081 06:44:45 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:39.339 06:44:46 -- event/event.sh@35 -- # sleep 3 00:04:39.597 [2024-05-12 06:44:46.510008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.597 [2024-05-12 06:44:46.623275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.597 [2024-05-12 06:44:46.623280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.597 [2024-05-12 06:44:46.683856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:39.597 [2024-05-12 06:44:46.683921] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:42.157 06:44:49 -- event/event.sh@38 -- # waitforlisten 2910633 /var/tmp/spdk-nbd.sock 00:04:42.157 06:44:49 -- common/autotest_common.sh@819 -- # '[' -z 2910633 ']' 00:04:42.158 06:44:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.158 06:44:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:42.158 06:44:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:42.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:42.158 06:44:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:42.158 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:04:42.416 06:44:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:42.416 06:44:49 -- common/autotest_common.sh@852 -- # return 0 00:04:42.416 06:44:49 -- event/event.sh@39 -- # killprocess 2910633 00:04:42.416 06:44:49 -- common/autotest_common.sh@926 -- # '[' -z 2910633 ']' 00:04:42.416 06:44:49 -- common/autotest_common.sh@930 -- # kill -0 2910633 00:04:42.416 06:44:49 -- common/autotest_common.sh@931 -- # uname 00:04:42.416 06:44:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:42.416 06:44:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2910633 00:04:42.416 06:44:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:42.416 06:44:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:42.416 06:44:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2910633' 00:04:42.416 killing process with pid 2910633 00:04:42.416 06:44:49 -- common/autotest_common.sh@945 -- # kill 2910633 00:04:42.416 06:44:49 -- common/autotest_common.sh@950 -- # wait 2910633 00:04:42.675 spdk_app_start is called in Round 0. 00:04:42.675 Shutdown signal received, stop current app iteration 00:04:42.675 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:04:42.675 spdk_app_start is called in Round 1. 00:04:42.675 Shutdown signal received, stop current app iteration 00:04:42.675 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:04:42.675 spdk_app_start is called in Round 2. 00:04:42.675 Shutdown signal received, stop current app iteration 00:04:42.675 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:04:42.675 spdk_app_start is called in Round 3. 
00:04:42.675 Shutdown signal received, stop current app iteration 00:04:42.675 06:44:49 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:42.675 06:44:49 -- event/event.sh@42 -- # return 0 00:04:42.675 00:04:42.675 real 0m18.412s 00:04:42.675 user 0m40.298s 00:04:42.675 sys 0m3.274s 00:04:42.675 06:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.675 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:04:42.675 ************************************ 00:04:42.675 END TEST app_repeat 00:04:42.675 ************************************ 00:04:42.675 06:44:49 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:42.675 06:44:49 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.675 06:44:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.675 06:44:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.675 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:04:42.675 ************************************ 00:04:42.675 START TEST cpu_locks 00:04:42.675 ************************************ 00:04:42.675 06:44:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.933 * Looking for test storage... 
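The default_locks test checks lock ownership from outside the target process with `lslocks -p <pid> | grep -q spdk_cpu_lock`. The underlying behavior it relies on — a second process cannot take a file lock the first still holds — can be sketched with util-linux `flock` (the lock file here is an arbitrary temp file, not SPDK's actual /var/tmp lock file):

```shell
lockfile=$(mktemp)

# take the lock on fd 9 and keep holding it in this shell
exec 9>"$lockfile"
flock -n 9 && echo "acquired"

# a separate flock process opens its own descriptor, sees the file as
# locked, and -n (non-blocking) fails fast -- the same held-lock state
# that lslocks reports in the trace above
flock -n "$lockfile" true || echo "still held"
```

Closing fd 9 (`exec 9>&-`) releases the lock, after which the second `flock -n` succeeds.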
00:04:42.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.933 06:44:49 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:42.933 06:44:49 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:42.933 06:44:49 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:42.933 06:44:49 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:42.934 06:44:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.934 06:44:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.934 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:04:42.934 ************************************ 00:04:42.934 START TEST default_locks 00:04:42.934 ************************************ 00:04:42.934 06:44:49 -- common/autotest_common.sh@1104 -- # default_locks 00:04:42.934 06:44:49 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2913172 00:04:42.934 06:44:49 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.934 06:44:49 -- event/cpu_locks.sh@47 -- # waitforlisten 2913172 00:04:42.934 06:44:49 -- common/autotest_common.sh@819 -- # '[' -z 2913172 ']' 00:04:42.934 06:44:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.934 06:44:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:42.934 06:44:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.934 06:44:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:42.934 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:04:42.934 [2024-05-12 06:44:49.891931] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:42.934 [2024-05-12 06:44:49.892030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913172 ] 00:04:42.934 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.934 [2024-05-12 06:44:49.951432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.192 [2024-05-12 06:44:50.061986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:43.192 [2024-05-12 06:44:50.062142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.758 06:44:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:43.759 06:44:50 -- common/autotest_common.sh@852 -- # return 0 00:04:43.759 06:44:50 -- event/cpu_locks.sh@49 -- # locks_exist 2913172 00:04:43.759 06:44:50 -- event/cpu_locks.sh@22 -- # lslocks -p 2913172 00:04:43.759 06:44:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.325 lslocks: write error 00:04:44.325 06:44:51 -- event/cpu_locks.sh@50 -- # killprocess 2913172 00:04:44.325 06:44:51 -- common/autotest_common.sh@926 -- # '[' -z 2913172 ']' 00:04:44.325 06:44:51 -- common/autotest_common.sh@930 -- # kill -0 2913172 00:04:44.325 06:44:51 -- common/autotest_common.sh@931 -- # uname 00:04:44.325 06:44:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:44.325 06:44:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2913172 00:04:44.325 06:44:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:44.325 06:44:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:44.325 06:44:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2913172' 00:04:44.325 killing process with pid 2913172 00:04:44.325 06:44:51 -- common/autotest_common.sh@945 -- # kill 2913172 00:04:44.325 06:44:51 -- common/autotest_common.sh@950 -- # 
wait 2913172 00:04:44.583 06:44:51 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2913172 00:04:44.583 06:44:51 -- common/autotest_common.sh@640 -- # local es=0 00:04:44.583 06:44:51 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2913172 00:04:44.583 06:44:51 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:44.583 06:44:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:44.583 06:44:51 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:44.583 06:44:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:44.583 06:44:51 -- common/autotest_common.sh@643 -- # waitforlisten 2913172 00:04:44.583 06:44:51 -- common/autotest_common.sh@819 -- # '[' -z 2913172 ']' 00:04:44.583 06:44:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.583 06:44:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.583 06:44:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.583 06:44:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.583 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:04:44.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2913172) - No such process 00:04:44.583 ERROR: process (pid: 2913172) is no longer running 00:04:44.583 06:44:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:44.583 06:44:51 -- common/autotest_common.sh@852 -- # return 1 00:04:44.583 06:44:51 -- common/autotest_common.sh@643 -- # es=1 00:04:44.583 06:44:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:44.583 06:44:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:44.583 06:44:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:44.583 06:44:51 -- event/cpu_locks.sh@54 -- # no_locks 00:04:44.583 06:44:51 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:44.583 06:44:51 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:44.583 06:44:51 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:44.583 00:04:44.583 real 0m1.808s 00:04:44.583 user 0m1.940s 00:04:44.583 sys 0m0.550s 00:04:44.583 06:44:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.583 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:04:44.583 ************************************ 00:04:44.584 END TEST default_locks 00:04:44.584 ************************************ 00:04:44.584 06:44:51 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:44.584 06:44:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.584 06:44:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.584 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:04:44.584 ************************************ 00:04:44.584 START TEST default_locks_via_rpc 00:04:44.584 ************************************ 00:04:44.584 06:44:51 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:04:44.584 06:44:51 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=2913350 00:04:44.584 06:44:51 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.584 06:44:51 -- event/cpu_locks.sh@63 -- # waitforlisten 2913350 00:04:44.584 06:44:51 -- common/autotest_common.sh@819 -- # '[' -z 2913350 ']' 00:04:44.584 06:44:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.584 06:44:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.584 06:44:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.584 06:44:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.584 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:04:44.842 [2024-05-12 06:44:51.725236] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:44.842 [2024-05-12 06:44:51.725326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913350 ] 00:04:44.842 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.842 [2024-05-12 06:44:51.781901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.842 [2024-05-12 06:44:51.892424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:44.842 [2024-05-12 06:44:51.892575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.775 06:44:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:45.775 06:44:52 -- common/autotest_common.sh@852 -- # return 0 00:04:45.775 06:44:52 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:45.775 06:44:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.775 06:44:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.775 06:44:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.775 06:44:52 -- event/cpu_locks.sh@67 -- # no_locks 00:04:45.775 06:44:52 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:45.775 06:44:52 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:45.775 06:44:52 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:45.775 06:44:52 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:45.775 06:44:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.775 06:44:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.775 06:44:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.775 06:44:52 -- event/cpu_locks.sh@71 -- # locks_exist 2913350 00:04:45.775 06:44:52 -- event/cpu_locks.sh@22 -- # lslocks -p 2913350 00:04:45.775 06:44:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.033 06:44:53 -- event/cpu_locks.sh@73 -- # killprocess 2913350 
00:04:46.033 06:44:53 -- common/autotest_common.sh@926 -- # '[' -z 2913350 ']' 00:04:46.033 06:44:53 -- common/autotest_common.sh@930 -- # kill -0 2913350 00:04:46.033 06:44:53 -- common/autotest_common.sh@931 -- # uname 00:04:46.033 06:44:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:46.033 06:44:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2913350 00:04:46.033 06:44:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:46.033 06:44:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:46.033 06:44:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2913350' 00:04:46.033 killing process with pid 2913350 00:04:46.033 06:44:53 -- common/autotest_common.sh@945 -- # kill 2913350 00:04:46.033 06:44:53 -- common/autotest_common.sh@950 -- # wait 2913350 00:04:46.598 00:04:46.598 real 0m1.819s 00:04:46.598 user 0m1.993s 00:04:46.598 sys 0m0.562s 00:04:46.598 06:44:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.598 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:04:46.598 ************************************ 00:04:46.598 END TEST default_locks_via_rpc 00:04:46.598 ************************************ 00:04:46.598 06:44:53 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:46.598 06:44:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.598 06:44:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.599 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:04:46.599 ************************************ 00:04:46.599 START TEST non_locking_app_on_locked_coremask 00:04:46.599 ************************************ 00:04:46.599 06:44:53 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:04:46.599 06:44:53 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2913646 00:04:46.599 06:44:53 -- event/cpu_locks.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.599 06:44:53 -- event/cpu_locks.sh@81 -- # waitforlisten 2913646 /var/tmp/spdk.sock 00:04:46.599 06:44:53 -- common/autotest_common.sh@819 -- # '[' -z 2913646 ']' 00:04:46.599 06:44:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.599 06:44:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:46.599 06:44:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.599 06:44:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:46.599 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:04:46.599 [2024-05-12 06:44:53.569202] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:46.599 [2024-05-12 06:44:53.569291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913646 ] 00:04:46.599 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.599 [2024-05-12 06:44:53.626306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.857 [2024-05-12 06:44:53.734958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.857 [2024-05-12 06:44:53.735105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.791 06:44:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.791 06:44:54 -- common/autotest_common.sh@852 -- # return 0 00:04:47.791 06:44:54 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2913786 00:04:47.791 06:44:54 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r 
/var/tmp/spdk2.sock 00:04:47.791 06:44:54 -- event/cpu_locks.sh@85 -- # waitforlisten 2913786 /var/tmp/spdk2.sock 00:04:47.791 06:44:54 -- common/autotest_common.sh@819 -- # '[' -z 2913786 ']' 00:04:47.791 06:44:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.791 06:44:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:47.791 06:44:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.791 06:44:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:47.791 06:44:54 -- common/autotest_common.sh@10 -- # set +x 00:04:47.791 [2024-05-12 06:44:54.598489] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:47.791 [2024-05-12 06:44:54.598576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913786 ] 00:04:47.791 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.791 [2024-05-12 06:44:54.689379] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.791 [2024-05-12 06:44:54.689410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.050 [2024-05-12 06:44:54.924111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:48.050 [2024-05-12 06:44:54.924277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.615 06:44:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:48.615 06:44:55 -- common/autotest_common.sh@852 -- # return 0 00:04:48.615 06:44:55 -- event/cpu_locks.sh@87 -- # locks_exist 2913646 00:04:48.615 06:44:55 -- event/cpu_locks.sh@22 -- # lslocks -p 2913646 00:04:48.615 06:44:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.873 lslocks: write error 00:04:48.874 06:44:55 -- event/cpu_locks.sh@89 -- # killprocess 2913646 00:04:48.874 06:44:55 -- common/autotest_common.sh@926 -- # '[' -z 2913646 ']' 00:04:48.874 06:44:55 -- common/autotest_common.sh@930 -- # kill -0 2913646 00:04:48.874 06:44:55 -- common/autotest_common.sh@931 -- # uname 00:04:48.874 06:44:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:48.874 06:44:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2913646 00:04:48.874 06:44:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:48.874 06:44:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:48.874 06:44:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2913646' 00:04:48.874 killing process with pid 2913646 00:04:48.874 06:44:55 -- common/autotest_common.sh@945 -- # kill 2913646 00:04:48.874 06:44:55 -- common/autotest_common.sh@950 -- # wait 2913646 00:04:49.807 06:44:56 -- event/cpu_locks.sh@90 -- # killprocess 2913786 00:04:49.807 06:44:56 -- common/autotest_common.sh@926 -- # '[' -z 2913786 ']' 00:04:49.807 06:44:56 -- common/autotest_common.sh@930 -- # kill -0 2913786 00:04:49.807 06:44:56 -- common/autotest_common.sh@931 -- # uname 00:04:49.807 06:44:56 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:49.807 06:44:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2913786 00:04:49.807 06:44:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:49.807 06:44:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:49.807 06:44:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2913786' 00:04:49.807 killing process with pid 2913786 00:04:49.807 06:44:56 -- common/autotest_common.sh@945 -- # kill 2913786 00:04:49.807 06:44:56 -- common/autotest_common.sh@950 -- # wait 2913786 00:04:50.373 00:04:50.373 real 0m3.769s 00:04:50.373 user 0m4.120s 00:04:50.373 sys 0m1.045s 00:04:50.373 06:44:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.373 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.373 ************************************ 00:04:50.373 END TEST non_locking_app_on_locked_coremask 00:04:50.373 ************************************ 00:04:50.373 06:44:57 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:50.373 06:44:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.373 06:44:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.373 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.373 ************************************ 00:04:50.373 START TEST locking_app_on_unlocked_coremask 00:04:50.373 ************************************ 00:04:50.373 06:44:57 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:04:50.373 06:44:57 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2914103 00:04:50.373 06:44:57 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:50.373 06:44:57 -- event/cpu_locks.sh@99 -- # waitforlisten 2914103 /var/tmp/spdk.sock 00:04:50.373 06:44:57 -- common/autotest_common.sh@819 -- # '[' -z 2914103 ']' 
00:04:50.373 06:44:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.373 06:44:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.373 06:44:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.373 06:44:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.373 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.373 [2024-05-12 06:44:57.368406] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:50.373 [2024-05-12 06:44:57.368507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914103 ] 00:04:50.373 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.373 [2024-05-12 06:44:57.428910] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:50.373 [2024-05-12 06:44:57.428948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.631 [2024-05-12 06:44:57.552233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.631 [2024-05-12 06:44:57.552402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.195 06:44:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:51.195 06:44:58 -- common/autotest_common.sh@852 -- # return 0 00:04:51.195 06:44:58 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2914243 00:04:51.195 06:44:58 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:51.195 06:44:58 -- event/cpu_locks.sh@103 -- # waitforlisten 2914243 /var/tmp/spdk2.sock 00:04:51.195 06:44:58 -- common/autotest_common.sh@819 -- # '[' -z 2914243 ']' 00:04:51.195 06:44:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.195 06:44:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.195 06:44:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.195 06:44:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.195 06:44:58 -- common/autotest_common.sh@10 -- # set +x 00:04:51.453 [2024-05-12 06:44:58.352080] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:51.453 [2024-05-12 06:44:58.352159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914243 ] 00:04:51.453 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.453 [2024-05-12 06:44:58.448340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.710 [2024-05-12 06:44:58.680549] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:51.710 [2024-05-12 06:44:58.680726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.275 06:44:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:52.275 06:44:59 -- common/autotest_common.sh@852 -- # return 0 00:04:52.275 06:44:59 -- event/cpu_locks.sh@105 -- # locks_exist 2914243 00:04:52.275 06:44:59 -- event/cpu_locks.sh@22 -- # lslocks -p 2914243 00:04:52.275 06:44:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.841 lslocks: write error 00:04:52.841 06:44:59 -- event/cpu_locks.sh@107 -- # killprocess 2914103 00:04:52.841 06:44:59 -- common/autotest_common.sh@926 -- # '[' -z 2914103 ']' 00:04:52.841 06:44:59 -- common/autotest_common.sh@930 -- # kill -0 2914103 00:04:52.841 06:44:59 -- common/autotest_common.sh@931 -- # uname 00:04:52.841 06:44:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:52.841 06:44:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2914103 00:04:52.841 06:44:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:52.841 06:44:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:52.841 06:44:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2914103' 00:04:52.841 killing process with pid 2914103 00:04:52.841 06:44:59 -- common/autotest_common.sh@945 -- # kill 2914103 00:04:52.841 06:44:59 -- common/autotest_common.sh@950 -- # 
wait 2914103 00:04:53.775 06:45:00 -- event/cpu_locks.sh@108 -- # killprocess 2914243 00:04:53.775 06:45:00 -- common/autotest_common.sh@926 -- # '[' -z 2914243 ']' 00:04:53.775 06:45:00 -- common/autotest_common.sh@930 -- # kill -0 2914243 00:04:53.775 06:45:00 -- common/autotest_common.sh@931 -- # uname 00:04:53.775 06:45:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:53.775 06:45:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2914243 00:04:53.775 06:45:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:53.775 06:45:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:53.775 06:45:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2914243' 00:04:53.775 killing process with pid 2914243 00:04:53.775 06:45:00 -- common/autotest_common.sh@945 -- # kill 2914243 00:04:53.775 06:45:00 -- common/autotest_common.sh@950 -- # wait 2914243 00:04:54.033 00:04:54.033 real 0m3.732s 00:04:54.033 user 0m4.048s 00:04:54.033 sys 0m1.062s 00:04:54.033 06:45:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.033 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:04:54.033 ************************************ 00:04:54.033 END TEST locking_app_on_unlocked_coremask 00:04:54.033 ************************************ 00:04:54.033 06:45:01 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:54.033 06:45:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.033 06:45:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.033 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:04:54.033 ************************************ 00:04:54.033 START TEST locking_app_on_locked_coremask 00:04:54.033 ************************************ 00:04:54.033 06:45:01 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:04:54.033 06:45:01 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2914749 
00:04:54.033 06:45:01 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.033 06:45:01 -- event/cpu_locks.sh@116 -- # waitforlisten 2914749 /var/tmp/spdk.sock 00:04:54.033 06:45:01 -- common/autotest_common.sh@819 -- # '[' -z 2914749 ']' 00:04:54.033 06:45:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.033 06:45:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.033 06:45:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.033 06:45:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.033 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:04:54.033 [2024-05-12 06:45:01.127372] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:54.033 [2024-05-12 06:45:01.127468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914749 ] 00:04:54.033 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.293 [2024-05-12 06:45:01.191298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.293 [2024-05-12 06:45:01.306736] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.293 [2024-05-12 06:45:01.306913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.229 06:45:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.229 06:45:02 -- common/autotest_common.sh@852 -- # return 0 00:04:55.229 06:45:02 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2914933 00:04:55.229 06:45:02 -- event/cpu_locks.sh@118 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.229 06:45:02 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2914933 /var/tmp/spdk2.sock 00:04:55.229 06:45:02 -- common/autotest_common.sh@640 -- # local es=0 00:04:55.229 06:45:02 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2914933 /var/tmp/spdk2.sock 00:04:55.229 06:45:02 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:55.229 06:45:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:55.229 06:45:02 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:55.229 06:45:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:55.229 06:45:02 -- common/autotest_common.sh@643 -- # waitforlisten 2914933 /var/tmp/spdk2.sock 00:04:55.229 06:45:02 -- common/autotest_common.sh@819 -- # '[' -z 2914933 ']' 00:04:55.229 06:45:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.229 06:45:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:55.229 06:45:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.230 06:45:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:55.230 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:04:55.230 [2024-05-12 06:45:02.111629] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:55.230 [2024-05-12 06:45:02.111738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914933 ] 00:04:55.230 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.230 [2024-05-12 06:45:02.208883] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2914749 has claimed it. 00:04:55.230 [2024-05-12 06:45:02.208942] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:55.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2914933) - No such process 00:04:55.798 ERROR: process (pid: 2914933) is no longer running 00:04:55.798 06:45:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.798 06:45:02 -- common/autotest_common.sh@852 -- # return 1 00:04:55.798 06:45:02 -- common/autotest_common.sh@643 -- # es=1 00:04:55.798 06:45:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:55.799 06:45:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:55.799 06:45:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:55.799 06:45:02 -- event/cpu_locks.sh@122 -- # locks_exist 2914749 00:04:55.799 06:45:02 -- event/cpu_locks.sh@22 -- # lslocks -p 2914749 00:04:55.799 06:45:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.088 lslocks: write error 00:04:56.088 06:45:03 -- event/cpu_locks.sh@124 -- # killprocess 2914749 00:04:56.088 06:45:03 -- common/autotest_common.sh@926 -- # '[' -z 2914749 ']' 00:04:56.088 06:45:03 -- common/autotest_common.sh@930 -- # kill -0 2914749 00:04:56.088 06:45:03 -- common/autotest_common.sh@931 -- # uname 00:04:56.088 06:45:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:56.088 06:45:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2914749 00:04:56.347 06:45:03 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:56.347 06:45:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:56.347 06:45:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2914749' 00:04:56.347 killing process with pid 2914749 00:04:56.347 06:45:03 -- common/autotest_common.sh@945 -- # kill 2914749 00:04:56.347 06:45:03 -- common/autotest_common.sh@950 -- # wait 2914749 00:04:56.608 00:04:56.608 real 0m2.580s 00:04:56.608 user 0m2.900s 00:04:56.608 sys 0m0.705s 00:04:56.608 06:45:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.608 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.608 ************************************ 00:04:56.608 END TEST locking_app_on_locked_coremask 00:04:56.608 ************************************ 00:04:56.608 06:45:03 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:56.608 06:45:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.608 06:45:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.608 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.608 ************************************ 00:04:56.608 START TEST locking_overlapped_coremask 00:04:56.608 ************************************ 00:04:56.608 06:45:03 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:04:56.608 06:45:03 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2915102 00:04:56.608 06:45:03 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:56.608 06:45:03 -- event/cpu_locks.sh@133 -- # waitforlisten 2915102 /var/tmp/spdk.sock 00:04:56.608 06:45:03 -- common/autotest_common.sh@819 -- # '[' -z 2915102 ']' 00:04:56.608 06:45:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.608 06:45:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:56.608 06:45:03 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.608 06:45:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:56.608 06:45:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.608 [2024-05-12 06:45:03.728209] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:56.608 [2024-05-12 06:45:03.728302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915102 ] 00:04:56.867 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.867 [2024-05-12 06:45:03.786715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.867 [2024-05-12 06:45:03.896714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.867 [2024-05-12 06:45:03.896901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.867 [2024-05-12 06:45:03.900732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.867 [2024-05-12 06:45:03.900743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.804 06:45:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:57.804 06:45:04 -- common/autotest_common.sh@852 -- # return 0 00:04:57.804 06:45:04 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2915244 00:04:57.804 06:45:04 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2915244 /var/tmp/spdk2.sock 00:04:57.804 06:45:04 -- common/autotest_common.sh@640 -- # local es=0 00:04:57.804 06:45:04 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2915244 /var/tmp/spdk2.sock 00:04:57.804 06:45:04 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:57.804 
06:45:04 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:57.804 06:45:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:57.804 06:45:04 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:57.804 06:45:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:57.804 06:45:04 -- common/autotest_common.sh@643 -- # waitforlisten 2915244 /var/tmp/spdk2.sock 00:04:57.804 06:45:04 -- common/autotest_common.sh@819 -- # '[' -z 2915244 ']' 00:04:57.804 06:45:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.804 06:45:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.804 06:45:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.804 06:45:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.804 06:45:04 -- common/autotest_common.sh@10 -- # set +x 00:04:57.804 [2024-05-12 06:45:04.721373] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:57.804 [2024-05-12 06:45:04.721454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915244 ] 00:04:57.804 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.804 [2024-05-12 06:45:04.808195] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2915102 has claimed it. 00:04:57.804 [2024-05-12 06:45:04.808254] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:04:58.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2915244) - No such process 00:04:58.372 ERROR: process (pid: 2915244) is no longer running 00:04:58.372 06:45:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.372 06:45:05 -- common/autotest_common.sh@852 -- # return 1 00:04:58.372 06:45:05 -- common/autotest_common.sh@643 -- # es=1 00:04:58.372 06:45:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:58.372 06:45:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:58.373 06:45:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:58.373 06:45:05 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:58.373 06:45:05 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:58.373 06:45:05 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:58.373 06:45:05 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:58.373 06:45:05 -- event/cpu_locks.sh@141 -- # killprocess 2915102 00:04:58.373 06:45:05 -- common/autotest_common.sh@926 -- # '[' -z 2915102 ']' 00:04:58.373 06:45:05 -- common/autotest_common.sh@930 -- # kill -0 2915102 00:04:58.373 06:45:05 -- common/autotest_common.sh@931 -- # uname 00:04:58.373 06:45:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:58.373 06:45:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2915102 00:04:58.373 06:45:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:58.373 06:45:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:58.373 06:45:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2915102' 00:04:58.373 killing process with pid 2915102 00:04:58.373 
06:45:05 -- common/autotest_common.sh@945 -- # kill 2915102 00:04:58.373 06:45:05 -- common/autotest_common.sh@950 -- # wait 2915102 00:04:58.946 00:04:58.946 real 0m2.200s 00:04:58.946 user 0m6.187s 00:04:58.946 sys 0m0.479s 00:04:58.946 06:45:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.946 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:04:58.946 ************************************ 00:04:58.946 END TEST locking_overlapped_coremask 00:04:58.946 ************************************ 00:04:58.946 06:45:05 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:58.946 06:45:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.946 06:45:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.946 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:04:58.946 ************************************ 00:04:58.946 START TEST locking_overlapped_coremask_via_rpc 00:04:58.946 ************************************ 00:04:58.946 06:45:05 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:04:58.946 06:45:05 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2915410 00:04:58.946 06:45:05 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:58.946 06:45:05 -- event/cpu_locks.sh@149 -- # waitforlisten 2915410 /var/tmp/spdk.sock 00:04:58.946 06:45:05 -- common/autotest_common.sh@819 -- # '[' -z 2915410 ']' 00:04:58.946 06:45:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.946 06:45:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.946 06:45:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:58.946 06:45:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.946 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:04:58.947 [2024-05-12 06:45:05.953538] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:58.947 [2024-05-12 06:45:05.953620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915410 ] 00:04:58.947 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.947 [2024-05-12 06:45:06.015312] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:58.947 [2024-05-12 06:45:06.015352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.205 [2024-05-12 06:45:06.132483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.205 [2024-05-12 06:45:06.132903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.205 [2024-05-12 06:45:06.132946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.205 [2024-05-12 06:45:06.132949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.771 06:45:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.771 06:45:06 -- common/autotest_common.sh@852 -- # return 0 00:04:59.771 06:45:06 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2915553 00:04:59.771 06:45:06 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:59.771 06:45:06 -- event/cpu_locks.sh@153 -- # waitforlisten 2915553 /var/tmp/spdk2.sock 00:04:59.771 06:45:06 -- common/autotest_common.sh@819 -- # '[' -z 2915553 ']' 00:04:59.771 06:45:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.771 06:45:06 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:04:59.771 06:45:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.771 06:45:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:59.771 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:05:00.029 [2024-05-12 06:45:06.941564] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:00.029 [2024-05-12 06:45:06.941648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915553 ] 00:05:00.029 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.029 [2024-05-12 06:45:07.028076] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:00.029 [2024-05-12 06:45:07.028112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.289 [2024-05-12 06:45:07.245420] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.289 [2024-05-12 06:45:07.245690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.289 [2024-05-12 06:45:07.248792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:00.289 [2024-05-12 06:45:07.248795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.857 06:45:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.857 06:45:07 -- common/autotest_common.sh@852 -- # return 0 00:05:00.857 06:45:07 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.857 06:45:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:00.857 06:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:00.857 06:45:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:05:00.857 06:45:07 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.857 06:45:07 -- common/autotest_common.sh@640 -- # local es=0 00:05:00.857 06:45:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.857 06:45:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:00.857 06:45:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:00.857 06:45:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:00.857 06:45:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:00.857 06:45:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.857 06:45:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:00.857 06:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:00.857 [2024-05-12 06:45:07.898787] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2915410 has claimed it. 
00:05:00.857 request: 00:05:00.857 { 00:05:00.857 "method": "framework_enable_cpumask_locks", 00:05:00.857 "req_id": 1 00:05:00.857 } 00:05:00.857 Got JSON-RPC error response 00:05:00.857 response: 00:05:00.857 { 00:05:00.857 "code": -32603, 00:05:00.857 "message": "Failed to claim CPU core: 2" 00:05:00.857 } 00:05:00.857 06:45:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:00.857 06:45:07 -- common/autotest_common.sh@643 -- # es=1 00:05:00.857 06:45:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:00.857 06:45:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:00.857 06:45:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:00.857 06:45:07 -- event/cpu_locks.sh@158 -- # waitforlisten 2915410 /var/tmp/spdk.sock 00:05:00.857 06:45:07 -- common/autotest_common.sh@819 -- # '[' -z 2915410 ']' 00:05:00.857 06:45:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.857 06:45:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.857 06:45:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.857 06:45:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.857 06:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:01.115 06:45:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:01.115 06:45:08 -- common/autotest_common.sh@852 -- # return 0 00:05:01.115 06:45:08 -- event/cpu_locks.sh@159 -- # waitforlisten 2915553 /var/tmp/spdk2.sock 00:05:01.115 06:45:08 -- common/autotest_common.sh@819 -- # '[' -z 2915553 ']' 00:05:01.115 06:45:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.115 06:45:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:01.115 06:45:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.115 06:45:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:01.115 06:45:08 -- common/autotest_common.sh@10 -- # set +x 00:05:01.375 06:45:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:01.375 06:45:08 -- common/autotest_common.sh@852 -- # return 0 00:05:01.375 06:45:08 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:01.375 06:45:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.375 06:45:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.375 06:45:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.375 00:05:01.375 real 0m2.481s 00:05:01.375 user 0m1.195s 00:05:01.375 sys 0m0.219s 00:05:01.375 06:45:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.375 06:45:08 -- common/autotest_common.sh@10 -- # set +x 00:05:01.375 
************************************ 00:05:01.375 END TEST locking_overlapped_coremask_via_rpc 00:05:01.375 ************************************ 00:05:01.375 06:45:08 -- event/cpu_locks.sh@174 -- # cleanup 00:05:01.375 06:45:08 -- event/cpu_locks.sh@15 -- # [[ -z 2915410 ]] 00:05:01.375 06:45:08 -- event/cpu_locks.sh@15 -- # killprocess 2915410 00:05:01.375 06:45:08 -- common/autotest_common.sh@926 -- # '[' -z 2915410 ']' 00:05:01.375 06:45:08 -- common/autotest_common.sh@930 -- # kill -0 2915410 00:05:01.375 06:45:08 -- common/autotest_common.sh@931 -- # uname 00:05:01.375 06:45:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:01.375 06:45:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2915410 00:05:01.375 06:45:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:01.375 06:45:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:01.375 06:45:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2915410' 00:05:01.375 killing process with pid 2915410 00:05:01.375 06:45:08 -- common/autotest_common.sh@945 -- # kill 2915410 00:05:01.375 06:45:08 -- common/autotest_common.sh@950 -- # wait 2915410 00:05:01.941 06:45:08 -- event/cpu_locks.sh@16 -- # [[ -z 2915553 ]] 00:05:01.941 06:45:08 -- event/cpu_locks.sh@16 -- # killprocess 2915553 00:05:01.941 06:45:08 -- common/autotest_common.sh@926 -- # '[' -z 2915553 ']' 00:05:01.941 06:45:08 -- common/autotest_common.sh@930 -- # kill -0 2915553 00:05:01.941 06:45:08 -- common/autotest_common.sh@931 -- # uname 00:05:01.941 06:45:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:01.941 06:45:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2915553 00:05:01.941 06:45:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:01.941 06:45:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:01.941 06:45:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
2915553' 00:05:01.941 killing process with pid 2915553 00:05:01.941 06:45:08 -- common/autotest_common.sh@945 -- # kill 2915553 00:05:01.941 06:45:08 -- common/autotest_common.sh@950 -- # wait 2915553 00:05:02.199 06:45:09 -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.199 06:45:09 -- event/cpu_locks.sh@1 -- # cleanup 00:05:02.199 06:45:09 -- event/cpu_locks.sh@15 -- # [[ -z 2915410 ]] 00:05:02.199 06:45:09 -- event/cpu_locks.sh@15 -- # killprocess 2915410 00:05:02.199 06:45:09 -- common/autotest_common.sh@926 -- # '[' -z 2915410 ']' 00:05:02.199 06:45:09 -- common/autotest_common.sh@930 -- # kill -0 2915410 00:05:02.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2915410) - No such process 00:05:02.199 06:45:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2915410 is not found' 00:05:02.199 Process with pid 2915410 is not found 00:05:02.199 06:45:09 -- event/cpu_locks.sh@16 -- # [[ -z 2915553 ]] 00:05:02.199 06:45:09 -- event/cpu_locks.sh@16 -- # killprocess 2915553 00:05:02.199 06:45:09 -- common/autotest_common.sh@926 -- # '[' -z 2915553 ']' 00:05:02.199 06:45:09 -- common/autotest_common.sh@930 -- # kill -0 2915553 00:05:02.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2915553) - No such process 00:05:02.199 06:45:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2915553 is not found' 00:05:02.199 Process with pid 2915553 is not found 00:05:02.199 06:45:09 -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.199 00:05:02.199 real 0m19.522s 00:05:02.199 user 0m34.548s 00:05:02.199 sys 0m5.415s 00:05:02.199 06:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.199 06:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.199 ************************************ 00:05:02.199 END TEST cpu_locks 00:05:02.199 ************************************ 00:05:02.459 00:05:02.459 real 0m44.915s 00:05:02.459 user 1m25.228s 
00:05:02.459 sys 0m9.403s 00:05:02.459 06:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.459 06:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.459 ************************************ 00:05:02.459 END TEST event 00:05:02.459 ************************************ 00:05:02.459 06:45:09 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.459 06:45:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.459 06:45:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.459 06:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.459 ************************************ 00:05:02.459 START TEST thread 00:05:02.459 ************************************ 00:05:02.459 06:45:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.459 * Looking for test storage... 00:05:02.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:02.459 06:45:09 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.459 06:45:09 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:02.459 06:45:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.459 06:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.459 ************************************ 00:05:02.459 START TEST thread_poller_perf 00:05:02.459 ************************************ 00:05:02.459 06:45:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.459 [2024-05-12 06:45:09.421382] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:02.459 [2024-05-12 06:45:09.421467] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916431 ] 00:05:02.459 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.459 [2024-05-12 06:45:09.479981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.718 [2024-05-12 06:45:09.589803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.718 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:03.655 ====================================== 00:05:03.655 busy:2713492170 (cyc) 00:05:03.655 total_run_count: 282000 00:05:03.655 tsc_hz: 2700000000 (cyc) 00:05:03.655 ====================================== 00:05:03.655 poller_cost: 9622 (cyc), 3563 (nsec) 00:05:03.655 00:05:03.655 real 0m1.318s 00:05:03.655 user 0m1.231s 00:05:03.655 sys 0m0.080s 00:05:03.655 06:45:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.655 06:45:10 -- common/autotest_common.sh@10 -- # set +x 00:05:03.655 ************************************ 00:05:03.655 END TEST thread_poller_perf 00:05:03.655 ************************************ 00:05:03.655 06:45:10 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.655 06:45:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:03.655 06:45:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.655 06:45:10 -- common/autotest_common.sh@10 -- # set +x 00:05:03.655 ************************************ 00:05:03.655 START TEST thread_poller_perf 00:05:03.655 ************************************ 00:05:03.655 06:45:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.655 
[2024-05-12 06:45:10.766423] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:03.655 [2024-05-12 06:45:10.766512] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916585 ] 00:05:03.913 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.913 [2024-05-12 06:45:10.831074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.913 [2024-05-12 06:45:10.944048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.913 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:05.293 ====================================== 00:05:05.293 busy:2703334831 (cyc) 00:05:05.293 total_run_count: 3856000 00:05:05.293 tsc_hz: 2700000000 (cyc) 00:05:05.293 ====================================== 00:05:05.293 poller_cost: 701 (cyc), 259 (nsec) 00:05:05.293 00:05:05.293 real 0m1.316s 00:05:05.293 user 0m1.228s 00:05:05.293 sys 0m0.082s 00:05:05.293 06:45:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.293 06:45:12 -- common/autotest_common.sh@10 -- # set +x 00:05:05.293 ************************************ 00:05:05.293 END TEST thread_poller_perf 00:05:05.293 ************************************ 00:05:05.293 06:45:12 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:05.293 00:05:05.293 real 0m2.735s 00:05:05.293 user 0m2.505s 00:05:05.293 sys 0m0.231s 00:05:05.293 06:45:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.293 06:45:12 -- common/autotest_common.sh@10 -- # set +x 00:05:05.293 ************************************ 00:05:05.293 END TEST thread 00:05:05.293 ************************************ 00:05:05.293 06:45:12 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:05.293 06:45:12 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:05:05.293 06:45:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.293 06:45:12 -- common/autotest_common.sh@10 -- # set +x 00:05:05.293 ************************************ 00:05:05.293 START TEST accel 00:05:05.293 ************************************ 00:05:05.293 06:45:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:05.293 * Looking for test storage... 00:05:05.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:05.293 06:45:12 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:05.293 06:45:12 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:05.293 06:45:12 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:05.293 06:45:12 -- accel/accel.sh@59 -- # spdk_tgt_pid=2916872 00:05:05.293 06:45:12 -- accel/accel.sh@60 -- # waitforlisten 2916872 00:05:05.293 06:45:12 -- common/autotest_common.sh@819 -- # '[' -z 2916872 ']' 00:05:05.293 06:45:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.293 06:45:12 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:05.293 06:45:12 -- accel/accel.sh@58 -- # build_accel_config 00:05:05.293 06:45:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.293 06:45:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:05.293 06:45:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:05.293 06:45:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.293 06:45:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.293 06:45:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.293 06:45:12 -- common/autotest_common.sh@10 -- # set +x 00:05:05.293 06:45:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:05.293 06:45:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:05.293 06:45:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:05.293 06:45:12 -- accel/accel.sh@42 -- # jq -r . 00:05:05.293 [2024-05-12 06:45:12.208145] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:05.293 [2024-05-12 06:45:12.208232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916872 ] 00:05:05.293 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.294 [2024-05-12 06:45:12.267593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.294 [2024-05-12 06:45:12.382769] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:05.294 [2024-05-12 06:45:12.382937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.231 06:45:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:06.231 06:45:13 -- common/autotest_common.sh@852 -- # return 0 00:05:06.231 06:45:13 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:06.231 06:45:13 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:06.231 06:45:13 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:06.231 06:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.231 06:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:06.231 06:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for 
opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 
00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # IFS== 00:05:06.231 06:45:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.231 06:45:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.231 06:45:13 -- accel/accel.sh@67 -- # killprocess 2916872 00:05:06.231 06:45:13 -- common/autotest_common.sh@926 -- # '[' -z 2916872 ']' 00:05:06.231 06:45:13 -- common/autotest_common.sh@930 -- # kill -0 2916872 00:05:06.231 06:45:13 -- common/autotest_common.sh@931 -- # uname 00:05:06.231 06:45:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:06.231 06:45:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2916872 00:05:06.231 06:45:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:06.231 06:45:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:06.231 06:45:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2916872' 00:05:06.231 killing process with pid 2916872 00:05:06.231 06:45:13 -- common/autotest_common.sh@945 -- # kill 2916872 00:05:06.232 06:45:13 -- common/autotest_common.sh@950 -- # wait 2916872 00:05:06.799 06:45:13 -- accel/accel.sh@68 -- # trap - ERR 00:05:06.799 06:45:13 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:06.799 06:45:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:06.799 06:45:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.799 06:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:06.799 06:45:13 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:06.799 06:45:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:06.799 06:45:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:06.799 06:45:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:06.799 06:45:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.799 06:45:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.799 06:45:13 -- accel/accel.sh@35 
-- # [[ 0 -gt 0 ]] 00:05:06.799 06:45:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:06.799 06:45:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:06.799 06:45:13 -- accel/accel.sh@42 -- # jq -r . 00:05:06.799 06:45:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.799 06:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:06.799 06:45:13 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:06.799 06:45:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:06.799 06:45:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.799 06:45:13 -- common/autotest_common.sh@10 -- # set +x 00:05:06.799 ************************************ 00:05:06.799 START TEST accel_missing_filename 00:05:06.799 ************************************ 00:05:06.799 06:45:13 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:06.799 06:45:13 -- common/autotest_common.sh@640 -- # local es=0 00:05:06.799 06:45:13 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:06.799 06:45:13 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:06.799 06:45:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:06.799 06:45:13 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:06.799 06:45:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:06.799 06:45:13 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:06.799 06:45:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:06.799 06:45:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:06.799 06:45:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:06.799 06:45:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.799 06:45:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.799 06:45:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:06.799 
06:45:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:06.799 06:45:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:06.799 06:45:13 -- accel/accel.sh@42 -- # jq -r . 00:05:06.799 [2024-05-12 06:45:13.728993] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:06.799 [2024-05-12 06:45:13.729068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917084 ] 00:05:06.799 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.799 [2024-05-12 06:45:13.790622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.799 [2024-05-12 06:45:13.906950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.058 [2024-05-12 06:45:13.968717] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.058 [2024-05-12 06:45:14.055882] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:07.058 A filename is required. 
00:05:07.058 06:45:14 -- common/autotest_common.sh@643 -- # es=234 00:05:07.058 06:45:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:07.058 06:45:14 -- common/autotest_common.sh@652 -- # es=106 00:05:07.058 06:45:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:07.058 06:45:14 -- common/autotest_common.sh@660 -- # es=1 00:05:07.058 06:45:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:07.058 00:05:07.058 real 0m0.469s 00:05:07.058 user 0m0.360s 00:05:07.058 sys 0m0.142s 00:05:07.058 06:45:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.058 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.058 ************************************ 00:05:07.058 END TEST accel_missing_filename 00:05:07.058 ************************************ 00:05:07.316 06:45:14 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.316 06:45:14 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:07.316 06:45:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.316 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.316 ************************************ 00:05:07.316 START TEST accel_compress_verify 00:05:07.316 ************************************ 00:05:07.316 06:45:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.316 06:45:14 -- common/autotest_common.sh@640 -- # local es=0 00:05:07.316 06:45:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.316 06:45:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:07.316 06:45:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.316 06:45:14 -- common/autotest_common.sh@632 -- # type -t 
accel_perf 00:05:07.316 06:45:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.316 06:45:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.316 06:45:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.316 06:45:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.316 06:45:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.316 06:45:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.316 06:45:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.316 06:45:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.316 06:45:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.316 06:45:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.316 06:45:14 -- accel/accel.sh@42 -- # jq -r . 00:05:07.316 [2024-05-12 06:45:14.226170] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:07.316 [2024-05-12 06:45:14.226244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917111 ] 00:05:07.316 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.316 [2024-05-12 06:45:14.287844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.316 [2024-05-12 06:45:14.404006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.575 [2024-05-12 06:45:14.463459] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.575 [2024-05-12 06:45:14.541157] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:07.575 00:05:07.575 Compression does not support the verify option, aborting. 
00:05:07.575 06:45:14 -- common/autotest_common.sh@643 -- # es=161 00:05:07.575 06:45:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:07.575 06:45:14 -- common/autotest_common.sh@652 -- # es=33 00:05:07.575 06:45:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:07.575 06:45:14 -- common/autotest_common.sh@660 -- # es=1 00:05:07.575 06:45:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:07.575 00:05:07.575 real 0m0.453s 00:05:07.575 user 0m0.349s 00:05:07.575 sys 0m0.138s 00:05:07.575 06:45:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.575 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.575 ************************************ 00:05:07.575 END TEST accel_compress_verify 00:05:07.575 ************************************ 00:05:07.575 06:45:14 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:07.575 06:45:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:07.575 06:45:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.575 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.575 ************************************ 00:05:07.575 START TEST accel_wrong_workload 00:05:07.575 ************************************ 00:05:07.575 06:45:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:07.575 06:45:14 -- common/autotest_common.sh@640 -- # local es=0 00:05:07.575 06:45:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:07.575 06:45:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:07.575 06:45:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.575 06:45:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:07.575 06:45:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.575 06:45:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:07.575 06:45:14 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:07.575 06:45:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.575 06:45:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.575 06:45:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.575 06:45:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.575 06:45:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.575 06:45:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.575 06:45:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.575 06:45:14 -- accel/accel.sh@42 -- # jq -r . 00:05:07.575 Unsupported workload type: foobar 00:05:07.575 [2024-05-12 06:45:14.700948] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:07.836 accel_perf options: 00:05:07.836 [-h help message] 00:05:07.836 [-q queue depth per core] 00:05:07.836 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.836 [-T number of threads per core 00:05:07.836 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:07.836 [-t time in seconds] 00:05:07.836 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.836 [ dif_verify, , dif_generate, dif_generate_copy 00:05:07.836 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.836 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.836 [-S for crc32c workload, use this seed value (default 0) 00:05:07.836 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.836 [-f for fill workload, use this BYTE value (default 255) 00:05:07.836 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.836 [-y verify result if this switch is on] 00:05:07.836 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.836 Can be used to spread operations across a wider range of memory. 00:05:07.836 06:45:14 -- common/autotest_common.sh@643 -- # es=1 00:05:07.836 06:45:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:07.836 06:45:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:07.836 06:45:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:07.836 00:05:07.836 real 0m0.020s 00:05:07.836 user 0m0.010s 00:05:07.836 sys 0m0.010s 00:05:07.836 06:45:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.836 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.836 ************************************ 00:05:07.836 END TEST accel_wrong_workload 00:05:07.836 ************************************ 00:05:07.836 Error: writing output failed: Broken pipe 00:05:07.836 06:45:14 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.836 06:45:14 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:07.836 06:45:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:05:07.836 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.836 ************************************ 00:05:07.836 START TEST accel_negative_buffers 00:05:07.836 ************************************ 00:05:07.836 06:45:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.836 06:45:14 -- common/autotest_common.sh@640 -- # local es=0 00:05:07.836 06:45:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:07.836 06:45:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:07.836 06:45:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.836 06:45:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:07.836 06:45:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.836 06:45:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:07.836 06:45:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:07.836 06:45:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.836 06:45:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.836 06:45:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.836 06:45:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.836 06:45:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.836 06:45:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.836 06:45:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.836 06:45:14 -- accel/accel.sh@42 -- # jq -r . 00:05:07.836 -x option must be non-negative. 
00:05:07.836 [2024-05-12 06:45:14.750058] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:07.836 accel_perf options: 00:05:07.836 [-h help message] 00:05:07.836 [-q queue depth per core] 00:05:07.836 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.836 [-T number of threads per core 00:05:07.836 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:07.836 [-t time in seconds] 00:05:07.836 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.836 [ dif_verify, , dif_generate, dif_generate_copy 00:05:07.836 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.836 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.836 [-S for crc32c workload, use this seed value (default 0) 00:05:07.836 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.836 [-f for fill workload, use this BYTE value (default 255) 00:05:07.836 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.836 [-y verify result if this switch is on] 00:05:07.836 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.836 Can be used to spread operations across a wider range of memory. 
00:05:07.836 06:45:14 -- common/autotest_common.sh@643 -- # es=1 00:05:07.836 06:45:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:07.836 06:45:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:07.836 06:45:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:07.836 00:05:07.836 real 0m0.024s 00:05:07.836 user 0m0.011s 00:05:07.836 sys 0m0.013s 00:05:07.836 06:45:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.836 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.836 ************************************ 00:05:07.836 END TEST accel_negative_buffers 00:05:07.836 ************************************ 00:05:07.836 Error: writing output failed: Broken pipe 00:05:07.836 06:45:14 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:07.836 06:45:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:07.836 06:45:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.836 06:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.836 ************************************ 00:05:07.836 START TEST accel_crc32c 00:05:07.836 ************************************ 00:05:07.836 06:45:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:07.836 06:45:14 -- accel/accel.sh@16 -- # local accel_opc 00:05:07.836 06:45:14 -- accel/accel.sh@17 -- # local accel_module 00:05:07.836 06:45:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:07.836 06:45:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:07.836 06:45:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.836 06:45:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.836 06:45:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.836 06:45:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.836 06:45:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.836 06:45:14 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.836 06:45:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.836 06:45:14 -- accel/accel.sh@42 -- # jq -r . 00:05:07.836 [2024-05-12 06:45:14.793846] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:07.836 [2024-05-12 06:45:14.793904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917293 ] 00:05:07.836 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.836 [2024-05-12 06:45:14.857108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.095 [2024-05-12 06:45:14.974312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.474 06:45:16 -- accel/accel.sh@18 -- # out=' 00:05:09.474 SPDK Configuration: 00:05:09.474 Core mask: 0x1 00:05:09.474 00:05:09.474 Accel Perf Configuration: 00:05:09.474 Workload Type: crc32c 00:05:09.474 CRC-32C seed: 32 00:05:09.474 Transfer size: 4096 bytes 00:05:09.474 Vector count 1 00:05:09.474 Module: software 00:05:09.474 Queue depth: 32 00:05:09.474 Allocate depth: 32 00:05:09.474 # threads/core: 1 00:05:09.474 Run time: 1 seconds 00:05:09.474 Verify: Yes 00:05:09.474 00:05:09.474 Running for 1 seconds... 
00:05:09.474 00:05:09.474 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:09.474 ------------------------------------------------------------------------------------ 00:05:09.474 0,0 405472/s 1583 MiB/s 0 0 00:05:09.474 ==================================================================================== 00:05:09.474 Total 405472/s 1583 MiB/s 0 0' 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:09.474 06:45:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.474 06:45:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:09.474 06:45:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.474 06:45:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.474 06:45:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:09.474 06:45:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:09.474 06:45:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:09.474 06:45:16 -- accel/accel.sh@42 -- # jq -r . 00:05:09.474 [2024-05-12 06:45:16.265826] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:09.474 [2024-05-12 06:45:16.265916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917437 ] 00:05:09.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.474 [2024-05-12 06:45:16.327232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.474 [2024-05-12 06:45:16.443473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val= 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val= 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val=0x1 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val= 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val= 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val=crc32c 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- 
accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val=32 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val= 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val=software 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@23 -- # accel_module=software 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val=32 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val=32 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val=1 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 
-- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val=Yes 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val= 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:09.474 06:45:16 -- accel/accel.sh@21 -- # val= 00:05:09.474 06:45:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # IFS=: 00:05:09.474 06:45:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.852 06:45:17 -- accel/accel.sh@21 -- # val= 00:05:10.852 06:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # IFS=: 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # read -r var val 00:05:10.852 06:45:17 -- accel/accel.sh@21 -- # val= 00:05:10.852 06:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # IFS=: 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # read -r var val 00:05:10.852 06:45:17 -- accel/accel.sh@21 -- # val= 00:05:10.852 06:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # IFS=: 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # read -r var val 00:05:10.852 06:45:17 -- accel/accel.sh@21 -- # val= 00:05:10.852 06:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # IFS=: 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # read -r var val 00:05:10.852 06:45:17 -- accel/accel.sh@21 -- # val= 00:05:10.852 06:45:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # IFS=: 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # read -r var val 00:05:10.852 06:45:17 -- accel/accel.sh@21 -- # val= 00:05:10.852 06:45:17 -- accel/accel.sh@22 -- # 
case "$var" in 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # IFS=: 00:05:10.852 06:45:17 -- accel/accel.sh@20 -- # read -r var val 00:05:10.852 06:45:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:10.852 06:45:17 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:10.852 06:45:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.852 00:05:10.852 real 0m2.942s 00:05:10.852 user 0m2.647s 00:05:10.852 sys 0m0.288s 00:05:10.852 06:45:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.852 06:45:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.852 ************************************ 00:05:10.852 END TEST accel_crc32c 00:05:10.852 ************************************ 00:05:10.852 06:45:17 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:10.852 06:45:17 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:10.852 06:45:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.852 06:45:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.852 ************************************ 00:05:10.852 START TEST accel_crc32c_C2 00:05:10.852 ************************************ 00:05:10.852 06:45:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:10.852 06:45:17 -- accel/accel.sh@16 -- # local accel_opc 00:05:10.852 06:45:17 -- accel/accel.sh@17 -- # local accel_module 00:05:10.852 06:45:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:10.852 06:45:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:10.852 06:45:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.853 06:45:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:10.853 06:45:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.853 06:45:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.853 06:45:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:10.853 06:45:17 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:05:10.853 06:45:17 -- accel/accel.sh@41 -- # local IFS=, 00:05:10.853 06:45:17 -- accel/accel.sh@42 -- # jq -r . 00:05:10.853 [2024-05-12 06:45:17.759863] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:10.853 [2024-05-12 06:45:17.759932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917598 ] 00:05:10.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.853 [2024-05-12 06:45:17.820894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.853 [2024-05-12 06:45:17.938110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.255 06:45:19 -- accel/accel.sh@18 -- # out=' 00:05:12.255 SPDK Configuration: 00:05:12.255 Core mask: 0x1 00:05:12.255 00:05:12.255 Accel Perf Configuration: 00:05:12.255 Workload Type: crc32c 00:05:12.255 CRC-32C seed: 0 00:05:12.255 Transfer size: 4096 bytes 00:05:12.255 Vector count 2 00:05:12.255 Module: software 00:05:12.255 Queue depth: 32 00:05:12.255 Allocate depth: 32 00:05:12.255 # threads/core: 1 00:05:12.255 Run time: 1 seconds 00:05:12.255 Verify: Yes 00:05:12.255 00:05:12.255 Running for 1 seconds... 
00:05:12.255 00:05:12.255 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:12.255 ------------------------------------------------------------------------------------ 00:05:12.255 0,0 319360/s 2495 MiB/s 0 0 00:05:12.255 ==================================================================================== 00:05:12.255 Total 319360/s 1247 MiB/s 0 0' 00:05:12.255 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.255 06:45:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:12.255 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.255 06:45:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:12.255 06:45:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.255 06:45:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:12.255 06:45:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.255 06:45:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.255 06:45:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:12.255 06:45:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:12.255 06:45:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:12.255 06:45:19 -- accel/accel.sh@42 -- # jq -r . 00:05:12.255 [2024-05-12 06:45:19.207127] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:12.255 [2024-05-12 06:45:19.207209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917861 ] 00:05:12.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.255 [2024-05-12 06:45:19.267266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.515 [2024-05-12 06:45:19.384352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val= 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val= 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val=0x1 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val= 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val= 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val=crc32c 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- 
accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val=0 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val= 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val=software 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@23 -- # accel_module=software 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val=32 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val=32 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val=1 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- 
# read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val=Yes 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val= 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:12.515 06:45:19 -- accel/accel.sh@21 -- # val= 00:05:12.515 06:45:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # IFS=: 00:05:12.515 06:45:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.893 06:45:20 -- accel/accel.sh@21 -- # val= 00:05:13.893 06:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # IFS=: 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # read -r var val 00:05:13.893 06:45:20 -- accel/accel.sh@21 -- # val= 00:05:13.893 06:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # IFS=: 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # read -r var val 00:05:13.893 06:45:20 -- accel/accel.sh@21 -- # val= 00:05:13.893 06:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # IFS=: 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # read -r var val 00:05:13.893 06:45:20 -- accel/accel.sh@21 -- # val= 00:05:13.893 06:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # IFS=: 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # read -r var val 00:05:13.893 06:45:20 -- accel/accel.sh@21 -- # val= 00:05:13.893 06:45:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # IFS=: 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # read -r var val 00:05:13.893 06:45:20 -- accel/accel.sh@21 -- # val= 00:05:13.893 06:45:20 -- accel/accel.sh@22 -- # case 
"$var" in 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # IFS=: 00:05:13.893 06:45:20 -- accel/accel.sh@20 -- # read -r var val 00:05:13.893 06:45:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:13.893 06:45:20 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:13.893 06:45:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.893 00:05:13.893 real 0m2.916s 00:05:13.893 user 0m2.633s 00:05:13.893 sys 0m0.274s 00:05:13.893 06:45:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.893 06:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.893 ************************************ 00:05:13.893 END TEST accel_crc32c_C2 00:05:13.893 ************************************ 00:05:13.893 06:45:20 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:13.893 06:45:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:13.893 06:45:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.893 06:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.893 ************************************ 00:05:13.893 START TEST accel_copy 00:05:13.893 ************************************ 00:05:13.893 06:45:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:13.893 06:45:20 -- accel/accel.sh@16 -- # local accel_opc 00:05:13.893 06:45:20 -- accel/accel.sh@17 -- # local accel_module 00:05:13.893 06:45:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:13.893 06:45:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:13.893 06:45:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.893 06:45:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.893 06:45:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.893 06:45:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.893 06:45:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.893 06:45:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:05:13.893 06:45:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.893 06:45:20 -- accel/accel.sh@42 -- # jq -r . 00:05:13.893 [2024-05-12 06:45:20.700952] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:13.893 [2024-05-12 06:45:20.701036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918022 ] 00:05:13.893 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.893 [2024-05-12 06:45:20.762591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.893 [2024-05-12 06:45:20.878854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.290 06:45:22 -- accel/accel.sh@18 -- # out=' 00:05:15.290 SPDK Configuration: 00:05:15.290 Core mask: 0x1 00:05:15.290 00:05:15.290 Accel Perf Configuration: 00:05:15.290 Workload Type: copy 00:05:15.290 Transfer size: 4096 bytes 00:05:15.290 Vector count 1 00:05:15.290 Module: software 00:05:15.290 Queue depth: 32 00:05:15.290 Allocate depth: 32 00:05:15.290 # threads/core: 1 00:05:15.290 Run time: 1 seconds 00:05:15.290 Verify: Yes 00:05:15.290 00:05:15.290 Running for 1 seconds... 
00:05:15.290 00:05:15.290 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:15.290 ------------------------------------------------------------------------------------ 00:05:15.290 0,0 285792/s 1116 MiB/s 0 0 00:05:15.290 ==================================================================================== 00:05:15.290 Total 285792/s 1116 MiB/s 0 0' 00:05:15.290 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.290 06:45:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:15.290 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.290 06:45:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:15.290 06:45:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.290 06:45:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.290 06:45:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.290 06:45:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.290 06:45:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.290 06:45:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.290 06:45:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.290 06:45:22 -- accel/accel.sh@42 -- # jq -r . 00:05:15.290 [2024-05-12 06:45:22.171544] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:15.290 [2024-05-12 06:45:22.171625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918160 ] 00:05:15.290 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.290 [2024-05-12 06:45:22.233568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.290 [2024-05-12 06:45:22.353958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.550 06:45:22 -- accel/accel.sh@21 -- # val= 00:05:15.550 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.550 06:45:22 -- accel/accel.sh@21 -- # val= 00:05:15.550 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.550 06:45:22 -- accel/accel.sh@21 -- # val=0x1 00:05:15.550 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.550 06:45:22 -- accel/accel.sh@21 -- # val= 00:05:15.550 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.550 06:45:22 -- accel/accel.sh@21 -- # val= 00:05:15.550 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.550 06:45:22 -- accel/accel.sh@21 -- # val=copy 00:05:15.550 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.550 06:45:22 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.550 06:45:22 -- 
accel/accel.sh@20 -- # read -r var val 00:05:15.550 06:45:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:15.550 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.550 06:45:22 -- accel/accel.sh@21 -- # val= 00:05:15.550 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.550 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.550 06:45:22 -- accel/accel.sh@21 -- # val=software 00:05:15.550 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.551 06:45:22 -- accel/accel.sh@23 -- # accel_module=software 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.551 06:45:22 -- accel/accel.sh@21 -- # val=32 00:05:15.551 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.551 06:45:22 -- accel/accel.sh@21 -- # val=32 00:05:15.551 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.551 06:45:22 -- accel/accel.sh@21 -- # val=1 00:05:15.551 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.551 06:45:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:15.551 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.551 06:45:22 -- accel/accel.sh@21 -- # val=Yes 00:05:15.551 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.551 06:45:22 -- accel/accel.sh@20 
-- # read -r var val 00:05:15.551 06:45:22 -- accel/accel.sh@21 -- # val= 00:05:15.551 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:15.551 06:45:22 -- accel/accel.sh@21 -- # val= 00:05:15.551 06:45:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # IFS=: 00:05:15.551 06:45:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.929 06:45:23 -- accel/accel.sh@21 -- # val= 00:05:16.929 06:45:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # IFS=: 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # read -r var val 00:05:16.929 06:45:23 -- accel/accel.sh@21 -- # val= 00:05:16.929 06:45:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # IFS=: 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # read -r var val 00:05:16.929 06:45:23 -- accel/accel.sh@21 -- # val= 00:05:16.929 06:45:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # IFS=: 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # read -r var val 00:05:16.929 06:45:23 -- accel/accel.sh@21 -- # val= 00:05:16.929 06:45:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # IFS=: 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # read -r var val 00:05:16.929 06:45:23 -- accel/accel.sh@21 -- # val= 00:05:16.929 06:45:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # IFS=: 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # read -r var val 00:05:16.929 06:45:23 -- accel/accel.sh@21 -- # val= 00:05:16.929 06:45:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # IFS=: 00:05:16.929 06:45:23 -- accel/accel.sh@20 -- # read -r var val 00:05:16.929 06:45:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:16.929 06:45:23 -- 
accel/accel.sh@28 -- # [[ -n copy ]] 00:05:16.929 06:45:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.929 00:05:16.929 real 0m2.950s 00:05:16.929 user 0m2.650s 00:05:16.929 sys 0m0.291s 00:05:16.929 06:45:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.929 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:05:16.929 ************************************ 00:05:16.929 END TEST accel_copy 00:05:16.929 ************************************ 00:05:16.929 06:45:23 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.929 06:45:23 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:16.929 06:45:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.929 06:45:23 -- common/autotest_common.sh@10 -- # set +x 00:05:16.929 ************************************ 00:05:16.929 START TEST accel_fill 00:05:16.929 ************************************ 00:05:16.929 06:45:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.929 06:45:23 -- accel/accel.sh@16 -- # local accel_opc 00:05:16.929 06:45:23 -- accel/accel.sh@17 -- # local accel_module 00:05:16.929 06:45:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.929 06:45:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.929 06:45:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:16.929 06:45:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:16.929 06:45:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.929 06:45:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.929 06:45:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:16.929 06:45:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:16.929 06:45:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:16.929 06:45:23 -- accel/accel.sh@42 -- # jq -r . 
00:05:16.929 [2024-05-12 06:45:23.671337] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:16.929 [2024-05-12 06:45:23.671410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918446 ] 00:05:16.929 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.929 [2024-05-12 06:45:23.732629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.929 [2024-05-12 06:45:23.847655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.309 06:45:25 -- accel/accel.sh@18 -- # out=' 00:05:18.309 SPDK Configuration: 00:05:18.309 Core mask: 0x1 00:05:18.309 00:05:18.309 Accel Perf Configuration: 00:05:18.309 Workload Type: fill 00:05:18.309 Fill pattern: 0x80 00:05:18.309 Transfer size: 4096 bytes 00:05:18.309 Vector count 1 00:05:18.309 Module: software 00:05:18.309 Queue depth: 64 00:05:18.309 Allocate depth: 64 00:05:18.309 # threads/core: 1 00:05:18.309 Run time: 1 seconds 00:05:18.309 Verify: Yes 00:05:18.309 00:05:18.309 Running for 1 seconds... 
00:05:18.309 00:05:18.309 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:18.309 ------------------------------------------------------------------------------------ 00:05:18.309 0,0 411648/s 1608 MiB/s 0 0 00:05:18.309 ==================================================================================== 00:05:18.309 Total 411648/s 1608 MiB/s 0 0' 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.309 06:45:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.309 06:45:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.309 06:45:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.309 06:45:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.309 06:45:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.309 06:45:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.309 06:45:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.309 06:45:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.309 06:45:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.309 06:45:25 -- accel/accel.sh@42 -- # jq -r . 00:05:18.309 [2024-05-12 06:45:25.134694] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:18.309 [2024-05-12 06:45:25.134822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918588 ] 00:05:18.309 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.309 [2024-05-12 06:45:25.195089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.309 [2024-05-12 06:45:25.311396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.309 06:45:25 -- accel/accel.sh@21 -- # val= 00:05:18.309 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.309 06:45:25 -- accel/accel.sh@21 -- # val= 00:05:18.309 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.309 06:45:25 -- accel/accel.sh@21 -- # val=0x1 00:05:18.309 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.309 06:45:25 -- accel/accel.sh@21 -- # val= 00:05:18.309 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.309 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.309 06:45:25 -- accel/accel.sh@21 -- # val= 00:05:18.309 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val=fill 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- 
accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val=0x80 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val= 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val=software 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@23 -- # accel_module=software 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val=64 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val=64 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val=1 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 
-- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val=Yes 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val= 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:18.310 06:45:25 -- accel/accel.sh@21 -- # val= 00:05:18.310 06:45:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # IFS=: 00:05:18.310 06:45:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 06:45:26 -- accel/accel.sh@21 -- # val= 00:05:19.688 06:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 06:45:26 -- accel/accel.sh@21 -- # val= 00:05:19.688 06:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 06:45:26 -- accel/accel.sh@21 -- # val= 00:05:19.688 06:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 06:45:26 -- accel/accel.sh@21 -- # val= 00:05:19.688 06:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 06:45:26 -- accel/accel.sh@21 -- # val= 00:05:19.688 06:45:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 06:45:26 -- accel/accel.sh@21 -- # val= 00:05:19.688 06:45:26 -- accel/accel.sh@22 -- # 
case "$var" in 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # IFS=: 00:05:19.688 06:45:26 -- accel/accel.sh@20 -- # read -r var val 00:05:19.688 06:45:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:19.688 06:45:26 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:19.688 06:45:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.688 00:05:19.688 real 0m2.936s 00:05:19.688 user 0m2.652s 00:05:19.689 sys 0m0.275s 00:05:19.689 06:45:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.689 06:45:26 -- common/autotest_common.sh@10 -- # set +x 00:05:19.689 ************************************ 00:05:19.689 END TEST accel_fill 00:05:19.689 ************************************ 00:05:19.689 06:45:26 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:19.689 06:45:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:19.689 06:45:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.689 06:45:26 -- common/autotest_common.sh@10 -- # set +x 00:05:19.689 ************************************ 00:05:19.689 START TEST accel_copy_crc32c 00:05:19.689 ************************************ 00:05:19.689 06:45:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:19.689 06:45:26 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.689 06:45:26 -- accel/accel.sh@17 -- # local accel_module 00:05:19.689 06:45:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:19.689 06:45:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:19.689 06:45:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.689 06:45:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.689 06:45:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.689 06:45:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.689 06:45:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.689 06:45:26 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.689 06:45:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.689 06:45:26 -- accel/accel.sh@42 -- # jq -r . 00:05:19.689 [2024-05-12 06:45:26.635801] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:19.689 [2024-05-12 06:45:26.635884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918749 ] 00:05:19.689 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.689 [2024-05-12 06:45:26.699445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.948 [2024-05-12 06:45:26.818530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.327 06:45:28 -- accel/accel.sh@18 -- # out=' 00:05:21.327 SPDK Configuration: 00:05:21.327 Core mask: 0x1 00:05:21.327 00:05:21.327 Accel Perf Configuration: 00:05:21.327 Workload Type: copy_crc32c 00:05:21.327 CRC-32C seed: 0 00:05:21.327 Vector size: 4096 bytes 00:05:21.327 Transfer size: 4096 bytes 00:05:21.327 Vector count 1 00:05:21.327 Module: software 00:05:21.327 Queue depth: 32 00:05:21.327 Allocate depth: 32 00:05:21.327 # threads/core: 1 00:05:21.327 Run time: 1 seconds 00:05:21.327 Verify: Yes 00:05:21.327 00:05:21.327 Running for 1 seconds... 
00:05:21.327 00:05:21.327 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:21.327 ------------------------------------------------------------------------------------ 00:05:21.327 0,0 220608/s 861 MiB/s 0 0 00:05:21.327 ==================================================================================== 00:05:21.327 Total 220608/s 861 MiB/s 0 0' 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:21.327 06:45:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.327 06:45:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.327 06:45:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.327 06:45:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.327 06:45:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.327 06:45:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.327 06:45:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.327 06:45:28 -- accel/accel.sh@42 -- # jq -r . 00:05:21.327 [2024-05-12 06:45:28.112223] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:21.327 [2024-05-12 06:45:28.112306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919006 ] 00:05:21.327 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.327 [2024-05-12 06:45:28.174621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.327 [2024-05-12 06:45:28.293621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val= 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val= 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val=0x1 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val= 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val= 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- 
accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val=0 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val= 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val=software 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@23 -- # accel_module=software 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val=32 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val=32 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val=1 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 
-- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val=Yes 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val= 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:21.327 06:45:28 -- accel/accel.sh@21 -- # val= 00:05:21.327 06:45:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # IFS=: 00:05:21.327 06:45:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.707 06:45:29 -- accel/accel.sh@21 -- # val= 00:05:22.707 06:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # IFS=: 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # read -r var val 00:05:22.707 06:45:29 -- accel/accel.sh@21 -- # val= 00:05:22.707 06:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # IFS=: 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # read -r var val 00:05:22.707 06:45:29 -- accel/accel.sh@21 -- # val= 00:05:22.707 06:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # IFS=: 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # read -r var val 00:05:22.707 06:45:29 -- accel/accel.sh@21 -- # val= 00:05:22.707 06:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # IFS=: 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # read -r var val 00:05:22.707 06:45:29 -- accel/accel.sh@21 -- # val= 00:05:22.707 06:45:29 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # IFS=: 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # read -r var val 00:05:22.707 06:45:29 -- accel/accel.sh@21 -- # val= 00:05:22.707 06:45:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # IFS=: 00:05:22.707 06:45:29 -- accel/accel.sh@20 -- # read -r var val 00:05:22.707 06:45:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:22.707 06:45:29 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:22.707 06:45:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.707 00:05:22.707 real 0m2.949s 00:05:22.707 user 0m2.665s 00:05:22.707 sys 0m0.276s 00:05:22.707 06:45:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.707 06:45:29 -- common/autotest_common.sh@10 -- # set +x 00:05:22.707 ************************************ 00:05:22.707 END TEST accel_copy_crc32c 00:05:22.707 ************************************ 00:05:22.707 06:45:29 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:22.707 06:45:29 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:22.707 06:45:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.707 06:45:29 -- common/autotest_common.sh@10 -- # set +x 00:05:22.707 ************************************ 00:05:22.707 START TEST accel_copy_crc32c_C2 00:05:22.707 ************************************ 00:05:22.707 06:45:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:22.707 06:45:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.707 06:45:29 -- accel/accel.sh@17 -- # local accel_module 00:05:22.707 06:45:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:22.707 06:45:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:22.707 06:45:29 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:22.707 06:45:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.707 06:45:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.707 06:45:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.707 06:45:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.707 06:45:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.707 06:45:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.707 06:45:29 -- accel/accel.sh@42 -- # jq -r . 00:05:22.707 [2024-05-12 06:45:29.607548] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:22.707 [2024-05-12 06:45:29.607626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919177 ] 00:05:22.707 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.707 [2024-05-12 06:45:29.671634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.707 [2024-05-12 06:45:29.792032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.088 06:45:31 -- accel/accel.sh@18 -- # out=' 00:05:24.088 SPDK Configuration: 00:05:24.088 Core mask: 0x1 00:05:24.088 00:05:24.088 Accel Perf Configuration: 00:05:24.088 Workload Type: copy_crc32c 00:05:24.088 CRC-32C seed: 0 00:05:24.089 Vector size: 4096 bytes 00:05:24.089 Transfer size: 8192 bytes 00:05:24.089 Vector count 2 00:05:24.089 Module: software 00:05:24.089 Queue depth: 32 00:05:24.089 Allocate depth: 32 00:05:24.089 # threads/core: 1 00:05:24.089 Run time: 1 seconds 00:05:24.089 Verify: Yes 00:05:24.089 00:05:24.089 Running for 1 seconds... 
00:05:24.089 00:05:24.089 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:24.089 ------------------------------------------------------------------------------------ 00:05:24.089 0,0 154720/s 1208 MiB/s 0 0 00:05:24.089 ==================================================================================== 00:05:24.089 Total 154720/s 604 MiB/s 0 0' 00:05:24.089 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.089 06:45:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:24.089 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.089 06:45:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:24.089 06:45:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.089 06:45:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:24.089 06:45:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.089 06:45:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.089 06:45:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:24.089 06:45:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:24.089 06:45:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:24.089 06:45:31 -- accel/accel.sh@42 -- # jq -r . 00:05:24.089 [2024-05-12 06:45:31.093736] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:24.089 [2024-05-12 06:45:31.093820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919315 ] 00:05:24.089 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.089 [2024-05-12 06:45:31.159478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.348 [2024-05-12 06:45:31.280449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val= 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val= 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val=0x1 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val= 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val= 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- 
accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val=0 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val= 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val=software 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@23 -- # accel_module=software 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val=32 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val=32 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val=1 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 
-- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val=Yes 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val= 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:24.348 06:45:31 -- accel/accel.sh@21 -- # val= 00:05:24.348 06:45:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # IFS=: 00:05:24.348 06:45:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.724 06:45:32 -- accel/accel.sh@21 -- # val= 00:05:25.724 06:45:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # IFS=: 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # read -r var val 00:05:25.724 06:45:32 -- accel/accel.sh@21 -- # val= 00:05:25.724 06:45:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # IFS=: 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # read -r var val 00:05:25.724 06:45:32 -- accel/accel.sh@21 -- # val= 00:05:25.724 06:45:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # IFS=: 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # read -r var val 00:05:25.724 06:45:32 -- accel/accel.sh@21 -- # val= 00:05:25.724 06:45:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # IFS=: 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # read -r var val 00:05:25.724 06:45:32 -- accel/accel.sh@21 -- # val= 00:05:25.724 06:45:32 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # IFS=: 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # read -r var val 00:05:25.724 06:45:32 -- accel/accel.sh@21 -- # val= 00:05:25.724 06:45:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # IFS=: 00:05:25.724 06:45:32 -- accel/accel.sh@20 -- # read -r var val 00:05:25.724 06:45:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:25.724 06:45:32 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:25.724 06:45:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.724 00:05:25.724 real 0m2.970s 00:05:25.724 user 0m2.666s 00:05:25.724 sys 0m0.295s 00:05:25.725 06:45:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.725 06:45:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.725 ************************************ 00:05:25.725 END TEST accel_copy_crc32c_C2 00:05:25.725 ************************************ 00:05:25.725 06:45:32 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:25.725 06:45:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:25.725 06:45:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.725 06:45:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.725 ************************************ 00:05:25.725 START TEST accel_dualcast 00:05:25.725 ************************************ 00:05:25.725 06:45:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:25.725 06:45:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.725 06:45:32 -- accel/accel.sh@17 -- # local accel_module 00:05:25.725 06:45:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:25.725 06:45:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:25.725 06:45:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.725 06:45:32 -- 
accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.725 06:45:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.725 06:45:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.725 06:45:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.725 06:45:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.725 06:45:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.725 06:45:32 -- accel/accel.sh@42 -- # jq -r . 00:05:25.725 [2024-05-12 06:45:32.606681] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:25.725 [2024-05-12 06:45:32.606778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919564 ] 00:05:25.725 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.725 [2024-05-12 06:45:32.670812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.725 [2024-05-12 06:45:32.791240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.103 06:45:34 -- accel/accel.sh@18 -- # out=' 00:05:27.103 SPDK Configuration: 00:05:27.103 Core mask: 0x1 00:05:27.103 00:05:27.103 Accel Perf Configuration: 00:05:27.103 Workload Type: dualcast 00:05:27.103 Transfer size: 4096 bytes 00:05:27.103 Vector count 1 00:05:27.103 Module: software 00:05:27.103 Queue depth: 32 00:05:27.103 Allocate depth: 32 00:05:27.103 # threads/core: 1 00:05:27.103 Run time: 1 seconds 00:05:27.103 Verify: Yes 00:05:27.103 00:05:27.103 Running for 1 seconds... 
00:05:27.103 00:05:27.103 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:27.103 ------------------------------------------------------------------------------------ 00:05:27.103 0,0 298144/s 1164 MiB/s 0 0 00:05:27.103 ==================================================================================== 00:05:27.103 Total 298144/s 1164 MiB/s 0 0' 00:05:27.103 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.103 06:45:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:27.103 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.103 06:45:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:27.103 06:45:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.103 06:45:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.103 06:45:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.103 06:45:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.103 06:45:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.103 06:45:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.103 06:45:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.103 06:45:34 -- accel/accel.sh@42 -- # jq -r . 00:05:27.103 [2024-05-12 06:45:34.088232] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:27.103 [2024-05-12 06:45:34.088325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919741 ] 00:05:27.103 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.103 [2024-05-12 06:45:34.151347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.363 [2024-05-12 06:45:34.270869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val= 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val= 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val=0x1 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val= 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val= 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val=dualcast 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- 
accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val= 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val=software 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@23 -- # accel_module=software 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val=32 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val=32 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val=1 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val=Yes 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 
-- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val= 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:27.363 06:45:34 -- accel/accel.sh@21 -- # val= 00:05:27.363 06:45:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # IFS=: 00:05:27.363 06:45:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.743 06:45:35 -- accel/accel.sh@21 -- # val= 00:05:28.743 06:45:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # IFS=: 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # read -r var val 00:05:28.743 06:45:35 -- accel/accel.sh@21 -- # val= 00:05:28.743 06:45:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # IFS=: 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # read -r var val 00:05:28.743 06:45:35 -- accel/accel.sh@21 -- # val= 00:05:28.743 06:45:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # IFS=: 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # read -r var val 00:05:28.743 06:45:35 -- accel/accel.sh@21 -- # val= 00:05:28.743 06:45:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # IFS=: 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # read -r var val 00:05:28.743 06:45:35 -- accel/accel.sh@21 -- # val= 00:05:28.743 06:45:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # IFS=: 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # read -r var val 00:05:28.743 06:45:35 -- accel/accel.sh@21 -- # val= 00:05:28.743 06:45:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # IFS=: 00:05:28.743 06:45:35 -- accel/accel.sh@20 -- # read -r var val 00:05:28.743 06:45:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:28.743 06:45:35 -- 
accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:28.743 06:45:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.743 00:05:28.743 real 0m2.969s 00:05:28.743 user 0m2.665s 00:05:28.743 sys 0m0.293s 00:05:28.743 06:45:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.743 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:05:28.743 ************************************ 00:05:28.743 END TEST accel_dualcast 00:05:28.743 ************************************ 00:05:28.743 06:45:35 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:28.743 06:45:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:28.743 06:45:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.743 06:45:35 -- common/autotest_common.sh@10 -- # set +x 00:05:28.743 ************************************ 00:05:28.743 START TEST accel_compare 00:05:28.743 ************************************ 00:05:28.743 06:45:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:05:28.743 06:45:35 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.743 06:45:35 -- accel/accel.sh@17 -- # local accel_module 00:05:28.743 06:45:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:28.743 06:45:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:28.743 06:45:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.743 06:45:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.743 06:45:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.743 06:45:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.743 06:45:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.743 06:45:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.743 06:45:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.743 06:45:35 -- accel/accel.sh@42 -- # jq -r . 
00:05:28.743 [2024-05-12 06:45:35.597665] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:28.743 [2024-05-12 06:45:35.597758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919904 ] 00:05:28.743 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.743 [2024-05-12 06:45:35.659300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.743 [2024-05-12 06:45:35.779703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.160 06:45:37 -- accel/accel.sh@18 -- # out=' 00:05:30.160 SPDK Configuration: 00:05:30.160 Core mask: 0x1 00:05:30.160 00:05:30.160 Accel Perf Configuration: 00:05:30.160 Workload Type: compare 00:05:30.160 Transfer size: 4096 bytes 00:05:30.160 Vector count 1 00:05:30.160 Module: software 00:05:30.160 Queue depth: 32 00:05:30.160 Allocate depth: 32 00:05:30.160 # threads/core: 1 00:05:30.160 Run time: 1 seconds 00:05:30.160 Verify: Yes 00:05:30.160 00:05:30.160 Running for 1 seconds... 
00:05:30.160 00:05:30.160 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:30.160 ------------------------------------------------------------------------------------ 00:05:30.160 0,0 397888/s 1554 MiB/s 0 0 00:05:30.160 ==================================================================================== 00:05:30.160 Total 397888/s 1554 MiB/s 0 0' 00:05:30.160 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.160 06:45:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:30.160 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.160 06:45:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:30.160 06:45:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.160 06:45:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.160 06:45:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.160 06:45:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.160 06:45:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.160 06:45:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.160 06:45:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.160 06:45:37 -- accel/accel.sh@42 -- # jq -r . 00:05:30.160 [2024-05-12 06:45:37.079578] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:30.160 [2024-05-12 06:45:37.079659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920061 ] 00:05:30.160 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.160 [2024-05-12 06:45:37.145277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.160 [2024-05-12 06:45:37.266052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val= 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val= 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val=0x1 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val= 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val= 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val=compare 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- 
accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val= 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val=software 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@23 -- # accel_module=software 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val=32 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val=32 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val=1 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val=Yes 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 
-- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val= 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:30.419 06:45:37 -- accel/accel.sh@21 -- # val= 00:05:30.419 06:45:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # IFS=: 00:05:30.419 06:45:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.800 06:45:38 -- accel/accel.sh@21 -- # val= 00:05:31.800 06:45:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # IFS=: 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # read -r var val 00:05:31.800 06:45:38 -- accel/accel.sh@21 -- # val= 00:05:31.800 06:45:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # IFS=: 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # read -r var val 00:05:31.800 06:45:38 -- accel/accel.sh@21 -- # val= 00:05:31.800 06:45:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # IFS=: 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # read -r var val 00:05:31.800 06:45:38 -- accel/accel.sh@21 -- # val= 00:05:31.800 06:45:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # IFS=: 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # read -r var val 00:05:31.800 06:45:38 -- accel/accel.sh@21 -- # val= 00:05:31.800 06:45:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # IFS=: 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # read -r var val 00:05:31.800 06:45:38 -- accel/accel.sh@21 -- # val= 00:05:31.800 06:45:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # IFS=: 00:05:31.800 06:45:38 -- accel/accel.sh@20 -- # read -r var val 00:05:31.800 06:45:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:31.800 06:45:38 -- 
accel/accel.sh@28 -- # [[ -n compare ]] 00:05:31.800 06:45:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.800 00:05:31.800 real 0m2.968s 00:05:31.800 user 0m2.660s 00:05:31.800 sys 0m0.299s 00:05:31.800 06:45:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.800 06:45:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.800 ************************************ 00:05:31.800 END TEST accel_compare 00:05:31.800 ************************************ 00:05:31.800 06:45:38 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:31.800 06:45:38 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:31.800 06:45:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.800 06:45:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.800 ************************************ 00:05:31.800 START TEST accel_xor 00:05:31.800 ************************************ 00:05:31.800 06:45:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:05:31.800 06:45:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.800 06:45:38 -- accel/accel.sh@17 -- # local accel_module 00:05:31.800 06:45:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:31.800 06:45:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:31.800 06:45:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.800 06:45:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.800 06:45:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.800 06:45:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.800 06:45:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.800 06:45:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.800 06:45:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.800 06:45:38 -- accel/accel.sh@42 -- # jq -r . 
00:05:31.800 [2024-05-12 06:45:38.589214] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:31.800 [2024-05-12 06:45:38.589293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920327 ] 00:05:31.800 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.800 [2024-05-12 06:45:38.651853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.800 [2024-05-12 06:45:38.770916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.180 06:45:40 -- accel/accel.sh@18 -- # out=' 00:05:33.180 SPDK Configuration: 00:05:33.180 Core mask: 0x1 00:05:33.180 00:05:33.180 Accel Perf Configuration: 00:05:33.180 Workload Type: xor 00:05:33.180 Source buffers: 2 00:05:33.180 Transfer size: 4096 bytes 00:05:33.180 Vector count 1 00:05:33.180 Module: software 00:05:33.180 Queue depth: 32 00:05:33.180 Allocate depth: 32 00:05:33.180 # threads/core: 1 00:05:33.180 Run time: 1 seconds 00:05:33.180 Verify: Yes 00:05:33.180 00:05:33.180 Running for 1 seconds... 
00:05:33.180 00:05:33.180 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:33.180 ------------------------------------------------------------------------------------ 00:05:33.180 0,0 192480/s 751 MiB/s 0 0 00:05:33.180 ==================================================================================== 00:05:33.181 Total 192480/s 751 MiB/s 0 0' 00:05:33.181 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.181 06:45:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:33.181 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.181 06:45:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:33.181 06:45:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.181 06:45:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.181 06:45:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.181 06:45:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.181 06:45:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.181 06:45:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.181 06:45:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.181 06:45:40 -- accel/accel.sh@42 -- # jq -r . 00:05:33.181 [2024-05-12 06:45:40.073164] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:33.181 [2024-05-12 06:45:40.073244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920473 ] 00:05:33.181 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.181 [2024-05-12 06:45:40.139429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.181 [2024-05-12 06:45:40.260668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val= 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val= 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val=0x1 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val= 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val= 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val=xor 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- 
accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val=2 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val= 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val=software 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@23 -- # accel_module=software 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val=32 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val=32 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val=1 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- 
# read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val=Yes 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val= 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:33.440 06:45:40 -- accel/accel.sh@21 -- # val= 00:05:33.440 06:45:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # IFS=: 00:05:33.440 06:45:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.819 06:45:41 -- accel/accel.sh@21 -- # val= 00:05:34.819 06:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.819 06:45:41 -- accel/accel.sh@20 -- # IFS=: 00:05:34.819 06:45:41 -- accel/accel.sh@20 -- # read -r var val 00:05:34.819 06:45:41 -- accel/accel.sh@21 -- # val= 00:05:34.819 06:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.819 06:45:41 -- accel/accel.sh@20 -- # IFS=: 00:05:34.819 06:45:41 -- accel/accel.sh@20 -- # read -r var val 00:05:34.819 06:45:41 -- accel/accel.sh@21 -- # val= 00:05:34.819 06:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.819 06:45:41 -- accel/accel.sh@20 -- # IFS=: 00:05:34.819 06:45:41 -- accel/accel.sh@20 -- # read -r var val 00:05:34.819 06:45:41 -- accel/accel.sh@21 -- # val= 00:05:34.819 06:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.819 06:45:41 -- accel/accel.sh@20 -- # IFS=: 00:05:34.819 06:45:41 -- accel/accel.sh@20 -- # read -r var val 00:05:34.819 06:45:41 -- accel/accel.sh@21 -- # val= 00:05:34.819 06:45:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.820 06:45:41 -- accel/accel.sh@20 -- # IFS=: 00:05:34.820 06:45:41 -- accel/accel.sh@20 -- # read -r var val 00:05:34.820 06:45:41 -- accel/accel.sh@21 -- # val= 00:05:34.820 06:45:41 -- accel/accel.sh@22 -- # case 
"$var" in 00:05:34.820 06:45:41 -- accel/accel.sh@20 -- # IFS=: 00:05:34.820 06:45:41 -- accel/accel.sh@20 -- # read -r var val 00:05:34.820 06:45:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:34.820 06:45:41 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:34.820 06:45:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.820 00:05:34.820 real 0m2.975s 00:05:34.820 user 0m2.681s 00:05:34.820 sys 0m0.285s 00:05:34.820 06:45:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.820 06:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.820 ************************************ 00:05:34.820 END TEST accel_xor 00:05:34.820 ************************************ 00:05:34.820 06:45:41 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:34.820 06:45:41 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:34.820 06:45:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.820 06:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.820 ************************************ 00:05:34.820 START TEST accel_xor 00:05:34.820 ************************************ 00:05:34.820 06:45:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:05:34.820 06:45:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.820 06:45:41 -- accel/accel.sh@17 -- # local accel_module 00:05:34.820 06:45:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:34.820 06:45:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:34.820 06:45:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.820 06:45:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.820 06:45:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.820 06:45:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.820 06:45:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.820 06:45:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:05:34.820 06:45:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.820 06:45:41 -- accel/accel.sh@42 -- # jq -r . 00:05:34.820 [2024-05-12 06:45:41.589310] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:34.820 [2024-05-12 06:45:41.589390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920644 ] 00:05:34.820 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.820 [2024-05-12 06:45:41.656575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.820 [2024-05-12 06:45:41.776103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.200 06:45:43 -- accel/accel.sh@18 -- # out=' 00:05:36.200 SPDK Configuration: 00:05:36.200 Core mask: 0x1 00:05:36.200 00:05:36.200 Accel Perf Configuration: 00:05:36.200 Workload Type: xor 00:05:36.200 Source buffers: 3 00:05:36.200 Transfer size: 4096 bytes 00:05:36.200 Vector count 1 00:05:36.200 Module: software 00:05:36.200 Queue depth: 32 00:05:36.200 Allocate depth: 32 00:05:36.200 # threads/core: 1 00:05:36.200 Run time: 1 seconds 00:05:36.200 Verify: Yes 00:05:36.200 00:05:36.200 Running for 1 seconds... 
00:05:36.200 00:05:36.200 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:36.200 ------------------------------------------------------------------------------------ 00:05:36.200 0,0 183264/s 715 MiB/s 0 0 00:05:36.200 ==================================================================================== 00:05:36.200 Total 183264/s 715 MiB/s 0 0' 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:36.200 06:45:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.200 06:45:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.200 06:45:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.200 06:45:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.200 06:45:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.200 06:45:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.200 06:45:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.200 06:45:43 -- accel/accel.sh@42 -- # jq -r . 00:05:36.200 [2024-05-12 06:45:43.078836] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:36.200 [2024-05-12 06:45:43.078917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920891 ] 00:05:36.200 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.200 [2024-05-12 06:45:43.140352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.200 [2024-05-12 06:45:43.254401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val= 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val= 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val=0x1 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val= 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val= 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val=xor 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- 
accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val=3 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val= 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val=software 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@23 -- # accel_module=software 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val=32 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val=32 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val=1 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- 
# read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val=Yes 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val= 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:36.200 06:45:43 -- accel/accel.sh@21 -- # val= 00:05:36.200 06:45:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # IFS=: 00:05:36.200 06:45:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.578 06:45:44 -- accel/accel.sh@21 -- # val= 00:05:37.578 06:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.578 06:45:44 -- accel/accel.sh@20 -- # IFS=: 00:05:37.578 06:45:44 -- accel/accel.sh@20 -- # read -r var val 00:05:37.578 06:45:44 -- accel/accel.sh@21 -- # val= 00:05:37.578 06:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.578 06:45:44 -- accel/accel.sh@20 -- # IFS=: 00:05:37.578 06:45:44 -- accel/accel.sh@20 -- # read -r var val 00:05:37.578 06:45:44 -- accel/accel.sh@21 -- # val= 00:05:37.578 06:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.578 06:45:44 -- accel/accel.sh@20 -- # IFS=: 00:05:37.578 06:45:44 -- accel/accel.sh@20 -- # read -r var val 00:05:37.578 06:45:44 -- accel/accel.sh@21 -- # val= 00:05:37.578 06:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.578 06:45:44 -- accel/accel.sh@20 -- # IFS=: 00:05:37.578 06:45:44 -- accel/accel.sh@20 -- # read -r var val 00:05:37.578 06:45:44 -- accel/accel.sh@21 -- # val= 00:05:37.579 06:45:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.579 06:45:44 -- accel/accel.sh@20 -- # IFS=: 00:05:37.579 06:45:44 -- accel/accel.sh@20 -- # read -r var val 00:05:37.579 06:45:44 -- accel/accel.sh@21 -- # val= 00:05:37.579 06:45:44 -- accel/accel.sh@22 -- # case 
"$var" in 00:05:37.579 06:45:44 -- accel/accel.sh@20 -- # IFS=: 00:05:37.579 06:45:44 -- accel/accel.sh@20 -- # read -r var val 00:05:37.579 06:45:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:37.579 06:45:44 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:37.579 06:45:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.579 00:05:37.579 real 0m2.960s 00:05:37.579 user 0m2.674s 00:05:37.579 sys 0m0.277s 00:05:37.579 06:45:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.579 06:45:44 -- common/autotest_common.sh@10 -- # set +x 00:05:37.579 ************************************ 00:05:37.579 END TEST accel_xor 00:05:37.579 ************************************ 00:05:37.579 06:45:44 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:37.579 06:45:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:37.579 06:45:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.579 06:45:44 -- common/autotest_common.sh@10 -- # set +x 00:05:37.579 ************************************ 00:05:37.579 START TEST accel_dif_verify 00:05:37.579 ************************************ 00:05:37.579 06:45:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:05:37.579 06:45:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.579 06:45:44 -- accel/accel.sh@17 -- # local accel_module 00:05:37.579 06:45:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:37.579 06:45:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:37.579 06:45:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.579 06:45:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.579 06:45:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.579 06:45:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.579 06:45:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.579 06:45:44 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:05:37.579 06:45:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.579 06:45:44 -- accel/accel.sh@42 -- # jq -r . 00:05:37.579 [2024-05-12 06:45:44.573254] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:37.579 [2024-05-12 06:45:44.573335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921062 ] 00:05:37.579 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.579 [2024-05-12 06:45:44.638008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.839 [2024-05-12 06:45:44.758706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.219 06:45:46 -- accel/accel.sh@18 -- # out=' 00:05:39.219 SPDK Configuration: 00:05:39.219 Core mask: 0x1 00:05:39.219 00:05:39.219 Accel Perf Configuration: 00:05:39.219 Workload Type: dif_verify 00:05:39.219 Vector size: 4096 bytes 00:05:39.219 Transfer size: 4096 bytes 00:05:39.219 Block size: 512 bytes 00:05:39.219 Metadata size: 8 bytes 00:05:39.219 Vector count 1 00:05:39.219 Module: software 00:05:39.219 Queue depth: 32 00:05:39.219 Allocate depth: 32 00:05:39.219 # threads/core: 1 00:05:39.219 Run time: 1 seconds 00:05:39.219 Verify: No 00:05:39.219 00:05:39.219 Running for 1 seconds... 
00:05:39.219 00:05:39.219 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:39.219 ------------------------------------------------------------------------------------ 00:05:39.219 0,0 81952/s 320 MiB/s 0 0 00:05:39.219 ==================================================================================== 00:05:39.219 Total 81952/s 320 MiB/s 0 0' 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:39.219 06:45:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.219 06:45:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.219 06:45:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.219 06:45:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.219 06:45:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.219 06:45:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.219 06:45:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.219 06:45:46 -- accel/accel.sh@42 -- # jq -r . 00:05:39.219 [2024-05-12 06:45:46.061057] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:39.219 [2024-05-12 06:45:46.061137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921200 ] 00:05:39.219 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.219 [2024-05-12 06:45:46.126273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.219 [2024-05-12 06:45:46.247446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val= 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val= 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val=0x1 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val= 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val= 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val=dif_verify 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- 
accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.219 06:45:46 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:39.219 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.219 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.220 06:45:46 -- accel/accel.sh@21 -- # val= 00:05:39.220 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.220 06:45:46 -- accel/accel.sh@21 -- # val=software 00:05:39.220 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.220 06:45:46 -- accel/accel.sh@23 -- # accel_module=software 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.220 06:45:46 -- accel/accel.sh@21 -- # val=32 00:05:39.220 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.220 06:45:46 -- accel/accel.sh@21 -- # val=32 00:05:39.220 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.220 06:45:46 -- 
accel/accel.sh@20 -- # read -r var val 00:05:39.220 06:45:46 -- accel/accel.sh@21 -- # val=1 00:05:39.220 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.220 06:45:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:39.220 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.220 06:45:46 -- accel/accel.sh@21 -- # val=No 00:05:39.220 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.220 06:45:46 -- accel/accel.sh@21 -- # val= 00:05:39.220 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:39.220 06:45:46 -- accel/accel.sh@21 -- # val= 00:05:39.220 06:45:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # IFS=: 00:05:39.220 06:45:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.598 06:45:47 -- accel/accel.sh@21 -- # val= 00:05:40.598 06:45:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # IFS=: 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # read -r var val 00:05:40.598 06:45:47 -- accel/accel.sh@21 -- # val= 00:05:40.598 06:45:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # IFS=: 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # read -r var val 00:05:40.598 06:45:47 -- accel/accel.sh@21 -- # val= 00:05:40.598 06:45:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # IFS=: 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # read -r var val 00:05:40.598 06:45:47 -- accel/accel.sh@21 -- # val= 00:05:40.598 06:45:47 
-- accel/accel.sh@22 -- # case "$var" in 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # IFS=: 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # read -r var val 00:05:40.598 06:45:47 -- accel/accel.sh@21 -- # val= 00:05:40.598 06:45:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # IFS=: 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # read -r var val 00:05:40.598 06:45:47 -- accel/accel.sh@21 -- # val= 00:05:40.598 06:45:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # IFS=: 00:05:40.598 06:45:47 -- accel/accel.sh@20 -- # read -r var val 00:05:40.598 06:45:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:40.598 06:45:47 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:40.598 06:45:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.598 00:05:40.598 real 0m2.975s 00:05:40.598 user 0m2.664s 00:05:40.598 sys 0m0.304s 00:05:40.598 06:45:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.598 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:40.598 ************************************ 00:05:40.598 END TEST accel_dif_verify 00:05:40.598 ************************************ 00:05:40.598 06:45:47 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:40.598 06:45:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:40.598 06:45:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.598 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:40.598 ************************************ 00:05:40.598 START TEST accel_dif_generate 00:05:40.598 ************************************ 00:05:40.599 06:45:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:05:40.599 06:45:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.599 06:45:47 -- accel/accel.sh@17 -- # local accel_module 00:05:40.599 06:45:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 
00:05:40.599 06:45:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:40.599 06:45:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.599 06:45:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.599 06:45:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.599 06:45:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.599 06:45:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.599 06:45:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.599 06:45:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.599 06:45:47 -- accel/accel.sh@42 -- # jq -r . 00:05:40.599 [2024-05-12 06:45:47.572357] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:40.599 [2024-05-12 06:45:47.572441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921474 ] 00:05:40.599 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.599 [2024-05-12 06:45:47.636288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.859 [2024-05-12 06:45:47.760362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.237 06:45:49 -- accel/accel.sh@18 -- # out=' 00:05:42.237 SPDK Configuration: 00:05:42.237 Core mask: 0x1 00:05:42.237 00:05:42.237 Accel Perf Configuration: 00:05:42.237 Workload Type: dif_generate 00:05:42.237 Vector size: 4096 bytes 00:05:42.237 Transfer size: 4096 bytes 00:05:42.237 Block size: 512 bytes 00:05:42.237 Metadata size: 8 bytes 00:05:42.237 Vector count 1 00:05:42.237 Module: software 00:05:42.237 Queue depth: 32 00:05:42.237 Allocate depth: 32 00:05:42.237 # threads/core: 1 00:05:42.237 Run time: 1 seconds 00:05:42.237 Verify: No 00:05:42.237 00:05:42.237 Running for 1 seconds... 
00:05:42.237 00:05:42.237 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:42.237 ------------------------------------------------------------------------------------ 00:05:42.237 0,0 96384/s 382 MiB/s 0 0 00:05:42.237 ==================================================================================== 00:05:42.237 Total 96384/s 376 MiB/s 0 0' 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.237 06:45:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.237 06:45:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:42.237 06:45:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.237 06:45:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.237 06:45:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.237 06:45:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.237 06:45:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.237 06:45:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.237 06:45:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.237 06:45:49 -- accel/accel.sh@42 -- # jq -r . 00:05:42.237 [2024-05-12 06:45:49.042518] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:42.237 [2024-05-12 06:45:49.042600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921628 ] 00:05:42.237 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.237 [2024-05-12 06:45:49.104147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.237 [2024-05-12 06:45:49.224022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.237 06:45:49 -- accel/accel.sh@21 -- # val= 00:05:42.237 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.237 06:45:49 -- accel/accel.sh@21 -- # val= 00:05:42.237 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.237 06:45:49 -- accel/accel.sh@21 -- # val=0x1 00:05:42.237 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.237 06:45:49 -- accel/accel.sh@21 -- # val= 00:05:42.237 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.237 06:45:49 -- accel/accel.sh@21 -- # val= 00:05:42.237 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.237 06:45:49 -- accel/accel.sh@21 -- # val=dif_generate 00:05:42.237 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.237 06:45:49 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.237 06:45:49 
-- accel/accel.sh@20 -- # read -r var val 00:05:42.237 06:45:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.237 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.237 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.237 06:45:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.237 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val= 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val=software 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val=32 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val=32 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 
-- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val=1 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val=No 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val= 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:42.238 06:45:49 -- accel/accel.sh@21 -- # val= 00:05:42.238 06:45:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # IFS=: 00:05:42.238 06:45:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.617 06:45:50 -- accel/accel.sh@21 -- # val= 00:05:43.617 06:45:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # IFS=: 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # read -r var val 00:05:43.617 06:45:50 -- accel/accel.sh@21 -- # val= 00:05:43.617 06:45:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # IFS=: 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # read -r var val 00:05:43.617 06:45:50 -- accel/accel.sh@21 -- # val= 00:05:43.617 06:45:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # IFS=: 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # read -r var val 00:05:43.617 06:45:50 -- accel/accel.sh@21 -- # val= 00:05:43.617 
06:45:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # IFS=: 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # read -r var val 00:05:43.617 06:45:50 -- accel/accel.sh@21 -- # val= 00:05:43.617 06:45:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # IFS=: 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # read -r var val 00:05:43.617 06:45:50 -- accel/accel.sh@21 -- # val= 00:05:43.617 06:45:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # IFS=: 00:05:43.617 06:45:50 -- accel/accel.sh@20 -- # read -r var val 00:05:43.617 06:45:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:43.617 06:45:50 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:43.617 06:45:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.617 00:05:43.617 real 0m2.957s 00:05:43.617 user 0m2.671s 00:05:43.617 sys 0m0.279s 00:05:43.617 06:45:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.617 06:45:50 -- common/autotest_common.sh@10 -- # set +x 00:05:43.617 ************************************ 00:05:43.617 END TEST accel_dif_generate 00:05:43.617 ************************************ 00:05:43.617 06:45:50 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:43.617 06:45:50 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:43.617 06:45:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.617 06:45:50 -- common/autotest_common.sh@10 -- # set +x 00:05:43.617 ************************************ 00:05:43.617 START TEST accel_dif_generate_copy 00:05:43.617 ************************************ 00:05:43.617 06:45:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:05:43.617 06:45:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.617 06:45:50 -- accel/accel.sh@17 -- # local accel_module 00:05:43.617 06:45:50 -- accel/accel.sh@18 -- # 
accel_perf -t 1 -w dif_generate_copy 00:05:43.617 06:45:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:43.617 06:45:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.617 06:45:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.617 06:45:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.617 06:45:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.617 06:45:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.617 06:45:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.617 06:45:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.617 06:45:50 -- accel/accel.sh@42 -- # jq -r . 00:05:43.617 [2024-05-12 06:45:50.553931] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:43.617 [2024-05-12 06:45:50.554009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921783 ] 00:05:43.617 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.617 [2024-05-12 06:45:50.616394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.617 [2024-05-12 06:45:50.736966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.997 06:45:52 -- accel/accel.sh@18 -- # out=' 00:05:44.997 SPDK Configuration: 00:05:44.997 Core mask: 0x1 00:05:44.997 00:05:44.997 Accel Perf Configuration: 00:05:44.997 Workload Type: dif_generate_copy 00:05:44.997 Vector size: 4096 bytes 00:05:44.997 Transfer size: 4096 bytes 00:05:44.997 Vector count 1 00:05:44.997 Module: software 00:05:44.997 Queue depth: 32 00:05:44.997 Allocate depth: 32 00:05:44.997 # threads/core: 1 00:05:44.997 Run time: 1 seconds 00:05:44.997 Verify: No 00:05:44.997 00:05:44.997 Running for 1 seconds... 
00:05:44.997 00:05:44.997 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:44.997 ------------------------------------------------------------------------------------ 00:05:44.997 0,0 76256/s 302 MiB/s 0 0 00:05:44.997 ==================================================================================== 00:05:44.997 Total 76256/s 297 MiB/s 0 0' 00:05:44.997 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:44.997 06:45:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:44.997 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:44.997 06:45:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:44.997 06:45:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.997 06:45:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.997 06:45:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.997 06:45:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.997 06:45:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.997 06:45:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.997 06:45:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.997 06:45:52 -- accel/accel.sh@42 -- # jq -r . 00:05:44.997 [2024-05-12 06:45:52.028841] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:44.997 [2024-05-12 06:45:52.028921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922047 ] 00:05:44.997 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.997 [2024-05-12 06:45:52.090658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.255 [2024-05-12 06:45:52.211034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.255 06:45:52 -- accel/accel.sh@21 -- # val= 00:05:45.255 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.255 06:45:52 -- accel/accel.sh@21 -- # val= 00:05:45.255 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.255 06:45:52 -- accel/accel.sh@21 -- # val=0x1 00:05:45.255 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.255 06:45:52 -- accel/accel.sh@21 -- # val= 00:05:45.255 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.255 06:45:52 -- accel/accel.sh@21 -- # val= 00:05:45.255 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.255 06:45:52 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:45.255 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.255 06:45:52 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:45.255 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.255 
06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.255 06:45:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.255 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val= 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val=software 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@23 -- # accel_module=software 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val=32 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val=32 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val=1 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 
-- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val=No 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val= 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:45.256 06:45:52 -- accel/accel.sh@21 -- # val= 00:05:45.256 06:45:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # IFS=: 00:05:45.256 06:45:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.634 06:45:53 -- accel/accel.sh@21 -- # val= 00:05:46.634 06:45:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # IFS=: 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # read -r var val 00:05:46.634 06:45:53 -- accel/accel.sh@21 -- # val= 00:05:46.634 06:45:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # IFS=: 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # read -r var val 00:05:46.634 06:45:53 -- accel/accel.sh@21 -- # val= 00:05:46.634 06:45:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # IFS=: 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # read -r var val 00:05:46.634 06:45:53 -- accel/accel.sh@21 -- # val= 00:05:46.634 06:45:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # IFS=: 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # read -r var val 00:05:46.634 06:45:53 -- accel/accel.sh@21 -- # val= 00:05:46.634 06:45:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # IFS=: 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # read -r var val 00:05:46.634 06:45:53 -- accel/accel.sh@21 -- # val= 00:05:46.634 06:45:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # IFS=: 00:05:46.634 06:45:53 -- accel/accel.sh@20 -- # read -r var val 00:05:46.634 06:45:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.634 06:45:53 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:46.634 06:45:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.634 00:05:46.634 real 0m2.959s 00:05:46.634 user 0m2.661s 00:05:46.634 sys 0m0.287s 00:05:46.634 06:45:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.634 06:45:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.634 ************************************ 00:05:46.634 END TEST accel_dif_generate_copy 00:05:46.634 ************************************ 00:05:46.634 06:45:53 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:46.634 06:45:53 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.634 06:45:53 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:46.634 06:45:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.634 06:45:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.634 ************************************ 00:05:46.634 START TEST accel_comp 00:05:46.634 ************************************ 00:05:46.634 06:45:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.634 06:45:53 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.634 06:45:53 -- accel/accel.sh@17 -- # local accel_module 00:05:46.634 06:45:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.634 06:45:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
00:05:46.634 06:45:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.634 06:45:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.634 06:45:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.634 06:45:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.634 06:45:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.634 06:45:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.634 06:45:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.634 06:45:53 -- accel/accel.sh@42 -- # jq -r . 00:05:46.634 [2024-05-12 06:45:53.534016] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:46.634 [2024-05-12 06:45:53.534101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922210 ] 00:05:46.634 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.634 [2024-05-12 06:45:53.595811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.634 [2024-05-12 06:45:53.715521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.039 06:45:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:48.039 00:05:48.039 SPDK Configuration: 00:05:48.039 Core mask: 0x1 00:05:48.039 00:05:48.039 Accel Perf Configuration: 00:05:48.039 Workload Type: compress 00:05:48.039 Transfer size: 4096 bytes 00:05:48.039 Vector count 1 00:05:48.039 Module: software 00:05:48.039 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.039 Queue depth: 32 00:05:48.039 Allocate depth: 32 00:05:48.039 # threads/core: 1 00:05:48.039 Run time: 1 seconds 00:05:48.039 Verify: No 00:05:48.039 00:05:48.039 Running for 1 seconds... 
00:05:48.039 00:05:48.039 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:48.039 ------------------------------------------------------------------------------------ 00:05:48.039 0,0 32416/s 135 MiB/s 0 0 00:05:48.039 ==================================================================================== 00:05:48.039 Total 32416/s 126 MiB/s 0 0' 00:05:48.039 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.039 06:45:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.039 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.039 06:45:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.039 06:45:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.039 06:45:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.039 06:45:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.039 06:45:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.039 06:45:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.039 06:45:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.039 06:45:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.039 06:45:55 -- accel/accel.sh@42 -- # jq -r . 00:05:48.039 [2024-05-12 06:45:55.022106] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:48.039 [2024-05-12 06:45:55.022186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922356 ] 00:05:48.039 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.039 [2024-05-12 06:45:55.082941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.302 [2024-05-12 06:45:55.202399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val= 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val= 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val= 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val=0x1 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val= 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val= 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 
-- # val=compress 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val= 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val=software 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@23 -- # accel_module=software 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val=32 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val=32 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val=1 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 
00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.302 06:45:55 -- accel/accel.sh@21 -- # val=No 00:05:48.302 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.302 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.303 06:45:55 -- accel/accel.sh@21 -- # val= 00:05:48.303 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.303 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.303 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:48.303 06:45:55 -- accel/accel.sh@21 -- # val= 00:05:48.303 06:45:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.303 06:45:55 -- accel/accel.sh@20 -- # IFS=: 00:05:48.303 06:45:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.681 06:45:56 -- accel/accel.sh@21 -- # val= 00:05:49.681 06:45:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # IFS=: 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # read -r var val 00:05:49.681 06:45:56 -- accel/accel.sh@21 -- # val= 00:05:49.681 06:45:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # IFS=: 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # read -r var val 00:05:49.681 06:45:56 -- accel/accel.sh@21 -- # val= 00:05:49.681 06:45:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # IFS=: 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # read -r var val 00:05:49.681 06:45:56 -- accel/accel.sh@21 -- # val= 00:05:49.681 06:45:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # IFS=: 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # read -r var val 00:05:49.681 06:45:56 -- accel/accel.sh@21 -- # 
val= 00:05:49.681 06:45:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # IFS=: 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # read -r var val 00:05:49.681 06:45:56 -- accel/accel.sh@21 -- # val= 00:05:49.681 06:45:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # IFS=: 00:05:49.681 06:45:56 -- accel/accel.sh@20 -- # read -r var val 00:05:49.681 06:45:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.681 06:45:56 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:49.681 06:45:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.681 00:05:49.681 real 0m2.970s 00:05:49.681 user 0m2.678s 00:05:49.681 sys 0m0.284s 00:05:49.681 06:45:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.681 06:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.681 ************************************ 00:05:49.681 END TEST accel_comp 00:05:49.681 ************************************ 00:05:49.681 06:45:56 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.681 06:45:56 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:49.681 06:45:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.681 06:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.681 ************************************ 00:05:49.681 START TEST accel_decomp 00:05:49.681 ************************************ 00:05:49.681 06:45:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.681 06:45:56 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.681 06:45:56 -- accel/accel.sh@17 -- # local accel_module 00:05:49.681 06:45:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.681 06:45:56 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.681 06:45:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.681 06:45:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.681 06:45:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.681 06:45:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.681 06:45:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.681 06:45:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.681 06:45:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.681 06:45:56 -- accel/accel.sh@42 -- # jq -r . 00:05:49.681 [2024-05-12 06:45:56.528473] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:49.681 [2024-05-12 06:45:56.528551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922632 ] 00:05:49.681 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.681 [2024-05-12 06:45:56.589347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.681 [2024-05-12 06:45:56.710107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.062 06:45:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:51.062 00:05:51.062 SPDK Configuration: 00:05:51.062 Core mask: 0x1 00:05:51.062 00:05:51.062 Accel Perf Configuration: 00:05:51.062 Workload Type: decompress 00:05:51.062 Transfer size: 4096 bytes 00:05:51.062 Vector count 1 00:05:51.062 Module: software 00:05:51.062 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.062 Queue depth: 32 00:05:51.063 Allocate depth: 32 00:05:51.063 # threads/core: 1 00:05:51.063 Run time: 1 seconds 00:05:51.063 Verify: Yes 00:05:51.063 00:05:51.063 Running for 1 seconds... 
00:05:51.063 00:05:51.063 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.063 ------------------------------------------------------------------------------------ 00:05:51.063 0,0 55488/s 102 MiB/s 0 0 00:05:51.063 ==================================================================================== 00:05:51.063 Total 55488/s 216 MiB/s 0 0' 00:05:51.063 06:45:57 -- accel/accel.sh@20 -- # IFS=: 00:05:51.063 06:45:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.063 06:45:57 -- accel/accel.sh@20 -- # read -r var val 00:05:51.063 06:45:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.063 06:45:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.063 06:45:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.063 06:45:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.063 06:45:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.063 06:45:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.063 06:45:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.063 06:45:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.063 06:45:57 -- accel/accel.sh@42 -- # jq -r . 00:05:51.063 [2024-05-12 06:45:58.013081] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:51.063 [2024-05-12 06:45:58.013161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922778 ] 00:05:51.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.063 [2024-05-12 06:45:58.081146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.323 [2024-05-12 06:45:58.200769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val= 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val= 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val= 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val=0x1 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val= 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val= 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 
-- # val=decompress 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val= 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val=software 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val=32 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val=32 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val=1 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # 
IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val=Yes 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val= 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:51.323 06:45:58 -- accel/accel.sh@21 -- # val= 00:05:51.323 06:45:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # IFS=: 00:05:51.323 06:45:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.702 06:45:59 -- accel/accel.sh@21 -- # val= 00:05:52.702 06:45:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # IFS=: 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # read -r var val 00:05:52.702 06:45:59 -- accel/accel.sh@21 -- # val= 00:05:52.702 06:45:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # IFS=: 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # read -r var val 00:05:52.702 06:45:59 -- accel/accel.sh@21 -- # val= 00:05:52.702 06:45:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # IFS=: 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # read -r var val 00:05:52.702 06:45:59 -- accel/accel.sh@21 -- # val= 00:05:52.702 06:45:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # IFS=: 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # read -r var val 00:05:52.702 06:45:59 -- accel/accel.sh@21 
-- # val= 00:05:52.702 06:45:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # IFS=: 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # read -r var val 00:05:52.702 06:45:59 -- accel/accel.sh@21 -- # val= 00:05:52.702 06:45:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # IFS=: 00:05:52.702 06:45:59 -- accel/accel.sh@20 -- # read -r var val 00:05:52.702 06:45:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.702 06:45:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:52.702 06:45:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.702 00:05:52.702 real 0m2.964s 00:05:52.702 user 0m2.675s 00:05:52.702 sys 0m0.280s 00:05:52.702 06:45:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.702 06:45:59 -- common/autotest_common.sh@10 -- # set +x 00:05:52.702 ************************************ 00:05:52.702 END TEST accel_decomp 00:05:52.702 ************************************ 00:05:52.702 06:45:59 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.702 06:45:59 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:52.702 06:45:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.703 06:45:59 -- common/autotest_common.sh@10 -- # set +x 00:05:52.703 ************************************ 00:05:52.703 START TEST accel_decmop_full 00:05:52.703 ************************************ 00:05:52.703 06:45:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.703 06:45:59 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.703 06:45:59 -- accel/accel.sh@17 -- # local accel_module 00:05:52.703 06:45:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
-y -o 0 00:05:52.703 06:45:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.703 06:45:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.703 06:45:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.703 06:45:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.703 06:45:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.703 06:45:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.703 06:45:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.703 06:45:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.703 06:45:59 -- accel/accel.sh@42 -- # jq -r . 00:05:52.703 [2024-05-12 06:45:59.515130] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:52.703 [2024-05-12 06:45:59.515210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922940 ] 00:05:52.703 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.703 [2024-05-12 06:45:59.581573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.703 [2024-05-12 06:45:59.701932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.081 06:46:00 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:54.081 00:05:54.081 SPDK Configuration: 00:05:54.081 Core mask: 0x1 00:05:54.081 00:05:54.081 Accel Perf Configuration: 00:05:54.081 Workload Type: decompress 00:05:54.081 Transfer size: 111250 bytes 00:05:54.081 Vector count 1 00:05:54.081 Module: software 00:05:54.081 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.081 Queue depth: 32 00:05:54.081 Allocate depth: 32 00:05:54.081 # threads/core: 1 00:05:54.081 Run time: 1 seconds 00:05:54.081 Verify: Yes 00:05:54.081 00:05:54.081 Running for 1 seconds... 00:05:54.081 00:05:54.081 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.081 ------------------------------------------------------------------------------------ 00:05:54.081 0,0 3776/s 155 MiB/s 0 0 00:05:54.081 ==================================================================================== 00:05:54.081 Total 3776/s 400 MiB/s 0 0' 00:05:54.081 06:46:00 -- accel/accel.sh@20 -- # IFS=: 00:05:54.081 06:46:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:54.081 06:46:00 -- accel/accel.sh@20 -- # read -r var val 00:05:54.081 06:46:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:54.081 06:46:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.081 06:46:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.081 06:46:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.081 06:46:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.081 06:46:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.081 06:46:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.081 06:46:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.081 06:46:00 -- accel/accel.sh@42 -- # jq -r . 
00:05:54.081 [2024-05-12 06:46:01.012681] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:54.081 [2024-05-12 06:46:01.012778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923140 ] 00:05:54.081 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.081 [2024-05-12 06:46:01.078562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.081 [2024-05-12 06:46:01.198718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val= 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val= 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val= 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val=0x1 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val= 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val= 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- 
accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val=decompress 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val= 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val=software 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val=32 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val=32 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- 
accel/accel.sh@21 -- # val=1 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val=Yes 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val= 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:54.339 06:46:01 -- accel/accel.sh@21 -- # val= 00:05:54.339 06:46:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # IFS=: 00:05:54.339 06:46:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.718 06:46:02 -- accel/accel.sh@21 -- # val= 00:05:55.718 06:46:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # IFS=: 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # read -r var val 00:05:55.718 06:46:02 -- accel/accel.sh@21 -- # val= 00:05:55.718 06:46:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # IFS=: 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # read -r var val 00:05:55.718 06:46:02 -- accel/accel.sh@21 -- # val= 00:05:55.718 06:46:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # IFS=: 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # read -r var val 00:05:55.718 06:46:02 -- accel/accel.sh@21 -- # val= 00:05:55.718 06:46:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.718 06:46:02 
-- accel/accel.sh@20 -- # IFS=: 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # read -r var val 00:05:55.718 06:46:02 -- accel/accel.sh@21 -- # val= 00:05:55.718 06:46:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # IFS=: 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # read -r var val 00:05:55.718 06:46:02 -- accel/accel.sh@21 -- # val= 00:05:55.718 06:46:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # IFS=: 00:05:55.718 06:46:02 -- accel/accel.sh@20 -- # read -r var val 00:05:55.718 06:46:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.718 06:46:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:55.718 06:46:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.718 00:05:55.718 real 0m3.004s 00:05:55.718 user 0m2.708s 00:05:55.718 sys 0m0.287s 00:05:55.718 06:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.718 06:46:02 -- common/autotest_common.sh@10 -- # set +x 00:05:55.718 ************************************ 00:05:55.718 END TEST accel_decmop_full 00:05:55.718 ************************************ 00:05:55.718 06:46:02 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:55.718 06:46:02 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:55.719 06:46:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.719 06:46:02 -- common/autotest_common.sh@10 -- # set +x 00:05:55.719 ************************************ 00:05:55.719 START TEST accel_decomp_mcore 00:05:55.719 ************************************ 00:05:55.719 06:46:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:55.719 06:46:02 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.719 06:46:02 -- accel/accel.sh@17 -- # local 
accel_module 00:05:55.719 06:46:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:55.719 06:46:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:55.719 06:46:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.719 06:46:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.719 06:46:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.719 06:46:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.719 06:46:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.719 06:46:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.719 06:46:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.719 06:46:02 -- accel/accel.sh@42 -- # jq -r . 00:05:55.719 [2024-05-12 06:46:02.548231] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:55.719 [2024-05-12 06:46:02.548325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923359 ] 00:05:55.719 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.719 [2024-05-12 06:46:02.613221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.719 [2024-05-12 06:46:02.733711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.719 [2024-05-12 06:46:02.733733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.719 [2024-05-12 06:46:02.733785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.719 [2024-05-12 06:46:02.733788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.095 06:46:04 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:57.095 00:05:57.095 SPDK Configuration: 00:05:57.095 Core mask: 0xf 00:05:57.095 00:05:57.095 Accel Perf Configuration: 00:05:57.095 Workload Type: decompress 00:05:57.095 Transfer size: 4096 bytes 00:05:57.096 Vector count 1 00:05:57.096 Module: software 00:05:57.096 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.096 Queue depth: 32 00:05:57.096 Allocate depth: 32 00:05:57.096 # threads/core: 1 00:05:57.096 Run time: 1 seconds 00:05:57.096 Verify: Yes 00:05:57.096 00:05:57.096 Running for 1 seconds... 00:05:57.096 00:05:57.096 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:57.096 ------------------------------------------------------------------------------------ 00:05:57.096 0,0 50464/s 92 MiB/s 0 0 00:05:57.096 3,0 50944/s 93 MiB/s 0 0 00:05:57.096 2,0 50880/s 93 MiB/s 0 0 00:05:57.096 1,0 50816/s 93 MiB/s 0 0 00:05:57.096 ==================================================================================== 00:05:57.096 Total 203104/s 793 MiB/s 0 0' 00:05:57.096 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.096 06:46:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:57.096 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.096 06:46:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:57.096 06:46:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.096 06:46:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.096 06:46:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.096 06:46:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.096 06:46:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.096 06:46:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.096 06:46:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.096 06:46:04 -- 
accel/accel.sh@42 -- # jq -r . 00:05:57.096 [2024-05-12 06:46:04.044162] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:57.096 [2024-05-12 06:46:04.044241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923511 ] 00:05:57.096 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.096 [2024-05-12 06:46:04.105852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.354 [2024-05-12 06:46:04.226084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.354 [2024-05-12 06:46:04.226139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.354 [2024-05-12 06:46:04.226193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.354 [2024-05-12 06:46:04.226197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.354 06:46:04 -- accel/accel.sh@21 -- # val= 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.354 06:46:04 -- accel/accel.sh@21 -- # val= 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.354 06:46:04 -- accel/accel.sh@21 -- # val= 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.354 06:46:04 -- accel/accel.sh@21 -- # val=0xf 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.354 06:46:04 -- 
accel/accel.sh@21 -- # val= 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.354 06:46:04 -- accel/accel.sh@21 -- # val= 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.354 06:46:04 -- accel/accel.sh@21 -- # val=decompress 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.354 06:46:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.354 06:46:04 -- accel/accel.sh@21 -- # val= 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.354 06:46:04 -- accel/accel.sh@21 -- # val=software 00:05:57.354 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.354 06:46:04 -- accel/accel.sh@23 -- # accel_module=software 00:05:57.354 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.355 06:46:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.355 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.355 06:46:04 -- accel/accel.sh@21 -- # val=32 00:05:57.355 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.355 06:46:04 -- 
accel/accel.sh@20 -- # IFS=: 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.355 06:46:04 -- accel/accel.sh@21 -- # val=32 00:05:57.355 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.355 06:46:04 -- accel/accel.sh@21 -- # val=1 00:05:57.355 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.355 06:46:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:57.355 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.355 06:46:04 -- accel/accel.sh@21 -- # val=Yes 00:05:57.355 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.355 06:46:04 -- accel/accel.sh@21 -- # val= 00:05:57.355 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:57.355 06:46:04 -- accel/accel.sh@21 -- # val= 00:05:57.355 06:46:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # IFS=: 00:05:57.355 06:46:04 -- accel/accel.sh@20 -- # read -r var val 00:05:58.736 06:46:05 -- accel/accel.sh@21 -- # val= 00:05:58.736 06:46:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # IFS=: 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # read -r var val 00:05:58.736 06:46:05 -- accel/accel.sh@21 -- # val= 00:05:58.736 06:46:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # IFS=: 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # read -r var val 00:05:58.736 
06:46:05 -- accel/accel.sh@21 -- # val= 00:05:58.736 06:46:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # IFS=: 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # read -r var val 00:05:58.736 06:46:05 -- accel/accel.sh@21 -- # val= 00:05:58.736 06:46:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # IFS=: 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # read -r var val 00:05:58.736 06:46:05 -- accel/accel.sh@21 -- # val= 00:05:58.736 06:46:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # IFS=: 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # read -r var val 00:05:58.736 06:46:05 -- accel/accel.sh@21 -- # val= 00:05:58.736 06:46:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # IFS=: 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # read -r var val 00:05:58.736 06:46:05 -- accel/accel.sh@21 -- # val= 00:05:58.736 06:46:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # IFS=: 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # read -r var val 00:05:58.736 06:46:05 -- accel/accel.sh@21 -- # val= 00:05:58.736 06:46:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # IFS=: 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # read -r var val 00:05:58.736 06:46:05 -- accel/accel.sh@21 -- # val= 00:05:58.736 06:46:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.736 06:46:05 -- accel/accel.sh@20 -- # IFS=: 00:05:58.737 06:46:05 -- accel/accel.sh@20 -- # read -r var val 00:05:58.737 06:46:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.737 06:46:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:58.737 06:46:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.737 00:05:58.737 real 0m2.983s 00:05:58.737 user 0m9.573s 00:05:58.737 sys 0m0.310s 00:05:58.737 06:46:05 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:05:58.737 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.737 ************************************ 00:05:58.737 END TEST accel_decomp_mcore 00:05:58.737 ************************************ 00:05:58.737 06:46:05 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.737 06:46:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:58.737 06:46:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.737 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.737 ************************************ 00:05:58.737 START TEST accel_decomp_full_mcore 00:05:58.737 ************************************ 00:05:58.737 06:46:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.737 06:46:05 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.737 06:46:05 -- accel/accel.sh@17 -- # local accel_module 00:05:58.737 06:46:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.737 06:46:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.737 06:46:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.737 06:46:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.737 06:46:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.737 06:46:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.737 06:46:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.737 06:46:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.737 06:46:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.737 06:46:05 -- accel/accel.sh@42 -- # jq -r . 
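The `accel/accel.sh@21 -- # val=…` prefixes above are bash xtrace output: under `set -x`, bash expands the `PS4` variable before each traced command. The following is a minimal sketch of that mechanism only — the `demo.sh` name and the exact `PS4` string are assumptions for illustration, not the SPDK scripts' actual `PS4` (which also prepends timestamps):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: produce trace lines shaped like "file@LINE -- cmd",
# as seen in the log above. Not the actual SPDK PS4.
PS4='+ demo.sh@${LINENO} -- '

run_traced() {
    set -x
    val=32        # traced as: + demo.sh@<line> -- val=32
    set +x
}

# xtrace goes to stderr; capture it, discard stdout
trace=$( run_traced 2>&1 1>/dev/null )
echo "$trace"
```

Bash repeats the first character of `PS4` to indicate nesting depth, so deeply nested calls show `++` or `+++` prefixes.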
00:05:58.737 [2024-05-12 06:46:05.558595] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:58.737 [2024-05-12 06:46:05.558673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923704 ] 00:05:58.737 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.737 [2024-05-12 06:46:05.621879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.737 [2024-05-12 06:46:05.744599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.737 [2024-05-12 06:46:05.744654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.737 [2024-05-12 06:46:05.744717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.737 [2024-05-12 06:46:05.744721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.113 06:46:07 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:00.113 00:06:00.113 SPDK Configuration: 00:06:00.113 Core mask: 0xf 00:06:00.113 00:06:00.113 Accel Perf Configuration: 00:06:00.113 Workload Type: decompress 00:06:00.113 Transfer size: 111250 bytes 00:06:00.113 Vector count 1 00:06:00.113 Module: software 00:06:00.113 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.113 Queue depth: 32 00:06:00.113 Allocate depth: 32 00:06:00.113 # threads/core: 1 00:06:00.113 Run time: 1 seconds 00:06:00.113 Verify: Yes 00:06:00.113 00:06:00.113 Running for 1 seconds... 
00:06:00.113 00:06:00.113 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.113 ------------------------------------------------------------------------------------ 00:06:00.113 0,0 3776/s 155 MiB/s 0 0 00:06:00.113 3,0 3776/s 155 MiB/s 0 0 00:06:00.113 2,0 3776/s 155 MiB/s 0 0 00:06:00.113 1,0 3776/s 155 MiB/s 0 0 00:06:00.113 ==================================================================================== 00:06:00.113 Total 15104/s 1602 MiB/s 0 0' 00:06:00.113 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.113 06:46:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.113 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.113 06:46:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.113 06:46:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.113 06:46:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.113 06:46:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.113 06:46:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.113 06:46:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.113 06:46:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.113 06:46:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.113 06:46:07 -- accel/accel.sh@42 -- # jq -r . 00:06:00.113 [2024-05-12 06:46:07.069363] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:00.113 [2024-05-12 06:46:07.069445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923940 ] 00:06:00.113 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.113 [2024-05-12 06:46:07.132480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.374 [2024-05-12 06:46:07.255706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.374 [2024-05-12 06:46:07.255752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.374 [2024-05-12 06:46:07.255805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.374 [2024-05-12 06:46:07.255809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val= 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val= 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val= 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val=0xf 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val= 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 
-- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val= 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val=decompress 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val= 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val=software 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val=32 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val=32 00:06:00.374 06:46:07 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val=1 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val=Yes 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val= 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:00.374 06:46:07 -- accel/accel.sh@21 -- # val= 00:06:00.374 06:46:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # IFS=: 00:06:00.374 06:46:07 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@21 -- # val= 00:06:01.753 06:46:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # IFS=: 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@21 -- # val= 00:06:01.753 06:46:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # IFS=: 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@21 -- # val= 00:06:01.753 06:46:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # IFS=: 00:06:01.753 
06:46:08 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@21 -- # val= 00:06:01.753 06:46:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # IFS=: 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@21 -- # val= 00:06:01.753 06:46:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # IFS=: 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@21 -- # val= 00:06:01.753 06:46:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # IFS=: 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@21 -- # val= 00:06:01.753 06:46:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # IFS=: 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@21 -- # val= 00:06:01.753 06:46:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # IFS=: 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@21 -- # val= 00:06:01.753 06:46:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # IFS=: 00:06:01.753 06:46:08 -- accel/accel.sh@20 -- # read -r var val 00:06:01.753 06:46:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.753 06:46:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:01.753 06:46:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.753 00:06:01.753 real 0m3.016s 00:06:01.753 user 0m9.692s 00:06:01.753 sys 0m0.315s 00:06:01.753 06:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.753 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:06:01.753 ************************************ 00:06:01.753 END TEST 
accel_decomp_full_mcore 00:06:01.753 ************************************ 00:06:01.753 06:46:08 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.753 06:46:08 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:01.753 06:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.753 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:06:01.753 ************************************ 00:06:01.753 START TEST accel_decomp_mthread 00:06:01.753 ************************************ 00:06:01.753 06:46:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.753 06:46:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.753 06:46:08 -- accel/accel.sh@17 -- # local accel_module 00:06:01.753 06:46:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.753 06:46:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.753 06:46:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.753 06:46:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.753 06:46:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.753 06:46:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.753 06:46:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.753 06:46:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.753 06:46:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.753 06:46:08 -- accel/accel.sh@42 -- # jq -r . 00:06:01.753 [2024-05-12 06:46:08.601029] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:01.753 [2024-05-12 06:46:08.601109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924103 ] 00:06:01.753 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.753 [2024-05-12 06:46:08.667984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.753 [2024-05-12 06:46:08.787162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.135 06:46:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:03.135 00:06:03.135 SPDK Configuration: 00:06:03.135 Core mask: 0x1 00:06:03.135 00:06:03.135 Accel Perf Configuration: 00:06:03.135 Workload Type: decompress 00:06:03.135 Transfer size: 4096 bytes 00:06:03.135 Vector count 1 00:06:03.135 Module: software 00:06:03.135 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:03.135 Queue depth: 32 00:06:03.135 Allocate depth: 32 00:06:03.136 # threads/core: 2 00:06:03.136 Run time: 1 seconds 00:06:03.136 Verify: Yes 00:06:03.136 00:06:03.136 Running for 1 seconds... 
00:06:03.136 00:06:03.136 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.136 ------------------------------------------------------------------------------------ 00:06:03.136 0,1 28128/s 51 MiB/s 0 0 00:06:03.136 0,0 28000/s 51 MiB/s 0 0 00:06:03.136 ==================================================================================== 00:06:03.136 Total 56128/s 219 MiB/s 0 0' 00:06:03.136 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.136 06:46:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:03.136 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.136 06:46:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:03.136 06:46:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.136 06:46:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.136 06:46:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.136 06:46:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.136 06:46:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.136 06:46:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.136 06:46:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.136 06:46:10 -- accel/accel.sh@42 -- # jq -r . 00:06:03.136 [2024-05-12 06:46:10.081846] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:03.136 [2024-05-12 06:46:10.081926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924263 ] 00:06:03.136 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.136 [2024-05-12 06:46:10.149858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.396 [2024-05-12 06:46:10.272141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val= 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val= 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val= 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val=0x1 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val= 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val= 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 
-- # val=decompress 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val= 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val=software 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val=32 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val=32 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val=2 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # 
IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val=Yes 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val= 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:03.396 06:46:10 -- accel/accel.sh@21 -- # val= 00:06:03.396 06:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # IFS=: 00:06:03.396 06:46:10 -- accel/accel.sh@20 -- # read -r var val 00:06:04.776 06:46:11 -- accel/accel.sh@21 -- # val= 00:06:04.776 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.776 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:06:04.776 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:06:04.776 06:46:11 -- accel/accel.sh@21 -- # val= 00:06:04.776 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.776 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:06:04.776 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:06:04.776 06:46:11 -- accel/accel.sh@21 -- # val= 00:06:04.776 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:06:04.777 06:46:11 -- accel/accel.sh@21 -- # val= 00:06:04.777 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:06:04.777 06:46:11 -- accel/accel.sh@21 
-- # val= 00:06:04.777 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:06:04.777 06:46:11 -- accel/accel.sh@21 -- # val= 00:06:04.777 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:06:04.777 06:46:11 -- accel/accel.sh@21 -- # val= 00:06:04.777 06:46:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # IFS=: 00:06:04.777 06:46:11 -- accel/accel.sh@20 -- # read -r var val 00:06:04.777 06:46:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.777 06:46:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:04.777 06:46:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.777 00:06:04.777 real 0m2.984s 00:06:04.777 user 0m2.689s 00:06:04.777 sys 0m0.286s 00:06:04.777 06:46:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.777 06:46:11 -- common/autotest_common.sh@10 -- # set +x 00:06:04.777 ************************************ 00:06:04.777 END TEST accel_decomp_mthread 00:06:04.777 ************************************ 00:06:04.777 06:46:11 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.777 06:46:11 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:04.777 06:46:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.777 06:46:11 -- common/autotest_common.sh@10 -- # set +x 00:06:04.777 ************************************ 00:06:04.777 START TEST accel_deomp_full_mthread 00:06:04.777 ************************************ 00:06:04.777 06:46:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 
00:06:04.777 06:46:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.777 06:46:11 -- accel/accel.sh@17 -- # local accel_module 00:06:04.777 06:46:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.777 06:46:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.777 06:46:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.777 06:46:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.777 06:46:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.777 06:46:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.777 06:46:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.777 06:46:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.777 06:46:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.777 06:46:11 -- accel/accel.sh@42 -- # jq -r . 00:06:04.777 [2024-05-12 06:46:11.610975] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:04.777 [2024-05-12 06:46:11.611052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924526 ] 00:06:04.777 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.777 [2024-05-12 06:46:11.677464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.777 [2024-05-12 06:46:11.797845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.183 06:46:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:06.183 00:06:06.183 SPDK Configuration: 00:06:06.183 Core mask: 0x1 00:06:06.183 00:06:06.183 Accel Perf Configuration: 00:06:06.183 Workload Type: decompress 00:06:06.183 Transfer size: 111250 bytes 00:06:06.183 Vector count 1 00:06:06.183 Module: software 00:06:06.183 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.183 Queue depth: 32 00:06:06.183 Allocate depth: 32 00:06:06.183 # threads/core: 2 00:06:06.183 Run time: 1 seconds 00:06:06.183 Verify: Yes 00:06:06.183 00:06:06.183 Running for 1 seconds... 00:06:06.183 00:06:06.183 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.183 ------------------------------------------------------------------------------------ 00:06:06.183 0,1 1952/s 80 MiB/s 0 0 00:06:06.183 0,0 1920/s 79 MiB/s 0 0 00:06:06.183 ==================================================================================== 00:06:06.183 Total 3872/s 410 MiB/s 0 0' 00:06:06.183 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.183 06:46:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:06.183 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.183 06:46:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:06.183 06:46:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.183 06:46:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.183 06:46:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.183 06:46:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.183 06:46:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.183 06:46:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.183 06:46:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.183 06:46:13 -- accel/accel.sh@42 -- # jq -r . 
00:06:06.183 [2024-05-12 06:46:13.135394] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:06.183 [2024-05-12 06:46:13.135475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924676 ] 00:06:06.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.183 [2024-05-12 06:46:13.198012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.442 [2024-05-12 06:46:13.318869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val= 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val= 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val= 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val=0x1 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val= 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val= 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- 
accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val=decompress 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val= 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val=software 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val=32 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val=32 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- 
accel/accel.sh@21 -- # val=2 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val=Yes 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val= 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:06.442 06:46:13 -- accel/accel.sh@21 -- # val= 00:06:06.442 06:46:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # IFS=: 00:06:06.442 06:46:13 -- accel/accel.sh@20 -- # read -r var val 00:06:07.827 06:46:14 -- accel/accel.sh@21 -- # val= 00:06:07.827 06:46:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # IFS=: 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # read -r var val 00:06:07.827 06:46:14 -- accel/accel.sh@21 -- # val= 00:06:07.827 06:46:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # IFS=: 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # read -r var val 00:06:07.827 06:46:14 -- accel/accel.sh@21 -- # val= 00:06:07.827 06:46:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # IFS=: 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # read -r var val 00:06:07.827 06:46:14 -- accel/accel.sh@21 -- # val= 00:06:07.827 06:46:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.827 06:46:14 
-- accel/accel.sh@20 -- # IFS=: 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # read -r var val 00:06:07.827 06:46:14 -- accel/accel.sh@21 -- # val= 00:06:07.827 06:46:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # IFS=: 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # read -r var val 00:06:07.827 06:46:14 -- accel/accel.sh@21 -- # val= 00:06:07.827 06:46:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # IFS=: 00:06:07.827 06:46:14 -- accel/accel.sh@20 -- # read -r var val 00:06:07.827 06:46:14 -- accel/accel.sh@21 -- # val= 00:06:07.828 06:46:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.828 06:46:14 -- accel/accel.sh@20 -- # IFS=: 00:06:07.828 06:46:14 -- accel/accel.sh@20 -- # read -r var val 00:06:07.828 06:46:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:07.828 06:46:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:07.828 06:46:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.828 00:06:07.828 real 0m3.048s 00:06:07.828 user 0m2.748s 00:06:07.828 sys 0m0.293s 00:06:07.828 06:46:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.828 06:46:14 -- common/autotest_common.sh@10 -- # set +x 00:06:07.828 ************************************ 00:06:07.828 END TEST accel_deomp_full_mthread 00:06:07.828 ************************************ 00:06:07.828 06:46:14 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:07.828 06:46:14 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:07.828 06:46:14 -- accel/accel.sh@129 -- # build_accel_config 00:06:07.828 06:46:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:07.828 06:46:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.828 06:46:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.828 06:46:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.828 06:46:14 
-- common/autotest_common.sh@10 -- # set +x 00:06:07.828 06:46:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.828 06:46:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.828 06:46:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.828 06:46:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.828 06:46:14 -- accel/accel.sh@42 -- # jq -r . 00:06:07.828 ************************************ 00:06:07.828 START TEST accel_dif_functional_tests 00:06:07.828 ************************************ 00:06:07.828 06:46:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:07.828 [2024-05-12 06:46:14.705400] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:07.828 [2024-05-12 06:46:14.705479] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924956 ] 00:06:07.828 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.828 [2024-05-12 06:46:14.767283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.828 [2024-05-12 06:46:14.894643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.828 [2024-05-12 06:46:14.894710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.828 [2024-05-12 06:46:14.894720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.088 00:06:08.088 00:06:08.088 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.088 http://cunit.sourceforge.net/ 00:06:08.088 00:06:08.088 00:06:08.088 Suite: accel_dif 00:06:08.088 Test: verify: DIF generated, GUARD check ...passed 00:06:08.088 Test: verify: DIF generated, APPTAG check ...passed 00:06:08.088 Test: verify: DIF generated, REFTAG check ...passed 00:06:08.088 Test: verify: DIF not generated, GUARD check ...[2024-05-12 06:46:14.996795] dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:08.088 [2024-05-12 06:46:14.996866] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:08.088 passed 00:06:08.088 Test: verify: DIF not generated, APPTAG check ...[2024-05-12 06:46:14.996909] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:08.088 [2024-05-12 06:46:14.996940] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:08.088 passed 00:06:08.088 Test: verify: DIF not generated, REFTAG check ...[2024-05-12 06:46:14.996976] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:08.088 [2024-05-12 06:46:14.997004] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:08.088 passed 00:06:08.088 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:08.088 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-12 06:46:14.997073] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:08.088 passed 00:06:08.088 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:08.088 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:08.088 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:08.088 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-12 06:46:14.997231] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:08.088 passed 00:06:08.088 Test: generate copy: DIF generated, GUARD check ...passed 00:06:08.088 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:08.088 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:08.088 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:08.088 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 
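Each DIF failure above is a field-by-field expected-vs-actual comparison (Guard, App Tag, Ref Tag). A toy stand-in for that comparison pattern, using values from the error lines in this log — the real checks live in SPDK's `dif.c`, not in shell:

```shell
# Toy comparator mirroring the _dif_verify error format seen above.
# Usage: dif_verify <expected> <actual> <field name>
dif_verify() {
    [[ $1 == "$2" ]] || {
        echo "Failed to compare $3: Expected=$1, Actual=$2"
        return 1
    }
}
```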
00:06:08.088 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:08.088 Test: generate copy: iovecs-len validate ...[2024-05-12 06:46:14.997491] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:08.088 passed 00:06:08.088 Test: generate copy: buffer alignment validate ...passed 00:06:08.088 00:06:08.088 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.088 suites 1 1 n/a 0 0 00:06:08.088 tests 20 20 20 0 0 00:06:08.088 asserts 204 204 204 0 n/a 00:06:08.088 00:06:08.088 Elapsed time = 0.003 seconds 00:06:08.348 00:06:08.348 real 0m0.598s 00:06:08.348 user 0m0.911s 00:06:08.348 sys 0m0.183s 00:06:08.348 06:46:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.348 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.348 ************************************ 00:06:08.348 END TEST accel_dif_functional_tests 00:06:08.348 ************************************ 00:06:08.348 00:06:08.348 real 1m3.176s 00:06:08.348 user 1m11.041s 00:06:08.348 sys 0m7.189s 00:06:08.348 06:46:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.348 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.348 ************************************ 00:06:08.348 END TEST accel 00:06:08.348 ************************************ 00:06:08.349 06:46:15 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:08.349 06:46:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.349 06:46:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.349 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.349 ************************************ 00:06:08.349 START TEST accel_rpc 00:06:08.349 ************************************ 00:06:08.349 06:46:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 
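The `waitforlisten` step below blocks until the freshly started `spdk_tgt` exposes its UNIX-domain RPC socket (`/var/tmp/spdk.sock`), retrying up to `max_retries` times. A simplified sketch of that polling loop — the real helper is in `common/autotest_common.sh` and also probes the RPC itself, so this is only the socket-existence half:

```shell
# Poll for a UNIX socket to appear, like waitforlisten does for spdk.sock.
# Usage: wait_for_sock <path> [retries]   (returns 1 if it never shows up)
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while ((retries-- > 0)); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```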
00:06:08.349 * Looking for test storage... 00:06:08.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:08.349 06:46:15 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.349 06:46:15 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2925024 00:06:08.349 06:46:15 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:08.349 06:46:15 -- accel/accel_rpc.sh@15 -- # waitforlisten 2925024 00:06:08.349 06:46:15 -- common/autotest_common.sh@819 -- # '[' -z 2925024 ']' 00:06:08.349 06:46:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.349 06:46:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.349 06:46:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.349 06:46:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.349 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.349 [2024-05-12 06:46:15.413861] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:08.349 [2024-05-12 06:46:15.413948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2925024 ] 00:06:08.349 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.607 [2024-05-12 06:46:15.480131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.607 [2024-05-12 06:46:15.602322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.607 [2024-05-12 06:46:15.602507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.607 06:46:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.607 06:46:15 -- common/autotest_common.sh@852 -- # return 0 00:06:08.607 06:46:15 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:08.608 06:46:15 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:08.608 06:46:15 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:08.608 06:46:15 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:08.608 06:46:15 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:08.608 06:46:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.608 06:46:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.608 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.608 ************************************ 00:06:08.608 START TEST accel_assign_opcode 00:06:08.608 ************************************ 00:06:08.608 06:46:15 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:08.608 06:46:15 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:08.608 06:46:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.608 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.608 [2024-05-12 06:46:15.663098] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module incorrect 00:06:08.608 06:46:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.608 06:46:15 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:08.608 06:46:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.608 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.608 [2024-05-12 06:46:15.671108] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:08.608 06:46:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.608 06:46:15 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:08.608 06:46:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.608 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.866 06:46:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.866 06:46:15 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:08.866 06:46:15 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:08.866 06:46:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.866 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.866 06:46:15 -- accel/accel_rpc.sh@42 -- # grep software 00:06:08.866 06:46:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.866 software 00:06:08.866 00:06:08.866 real 0m0.303s 00:06:08.866 user 0m0.035s 00:06:08.866 sys 0m0.007s 00:06:08.866 06:46:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.866 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.866 ************************************ 00:06:08.866 END TEST accel_assign_opcode 00:06:08.866 ************************************ 00:06:08.866 06:46:15 -- accel/accel_rpc.sh@55 -- # killprocess 2925024 00:06:08.866 06:46:15 -- common/autotest_common.sh@926 -- # '[' -z 2925024 ']' 00:06:08.866 06:46:15 -- common/autotest_common.sh@930 -- # kill -0 2925024 00:06:08.866 06:46:15 -- common/autotest_common.sh@931 -- # uname 00:06:08.866 
06:46:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.866 06:46:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2925024 00:06:09.124 06:46:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:09.125 06:46:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:09.125 06:46:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2925024' 00:06:09.125 killing process with pid 2925024 00:06:09.125 06:46:16 -- common/autotest_common.sh@945 -- # kill 2925024 00:06:09.125 06:46:16 -- common/autotest_common.sh@950 -- # wait 2925024 00:06:09.383 00:06:09.383 real 0m1.167s 00:06:09.383 user 0m1.087s 00:06:09.383 sys 0m0.424s 00:06:09.383 06:46:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.383 06:46:16 -- common/autotest_common.sh@10 -- # set +x 00:06:09.383 ************************************ 00:06:09.383 END TEST accel_rpc 00:06:09.383 ************************************ 00:06:09.383 06:46:16 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.383 06:46:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.641 06:46:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.641 06:46:16 -- common/autotest_common.sh@10 -- # set +x 00:06:09.641 ************************************ 00:06:09.641 START TEST app_cmdline 00:06:09.641 ************************************ 00:06:09.641 06:46:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.641 * Looking for test storage... 
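The `killprocess` sequence above first probes liveness with `kill -0`, then signals the target and reaps it with `wait`. A condensed sketch of that sequence (the real helper in `common/autotest_common.sh` additionally checks the process name via `ps -o comm=` and refuses to kill `sudo`, which this sketch omits):

```shell
# Simplified killprocess: verify the PID is alive, terminate it, reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1  # not running
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap; ignore SIGTERM exit status
}
```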
00:06:09.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:09.641 06:46:16 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:09.641 06:46:16 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2925227 00:06:09.641 06:46:16 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:09.641 06:46:16 -- app/cmdline.sh@18 -- # waitforlisten 2925227 00:06:09.641 06:46:16 -- common/autotest_common.sh@819 -- # '[' -z 2925227 ']' 00:06:09.641 06:46:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.641 06:46:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.641 06:46:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.641 06:46:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.641 06:46:16 -- common/autotest_common.sh@10 -- # set +x 00:06:09.641 [2024-05-12 06:46:16.614708] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
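The target below is started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so any other method (the test later tries `env_dpdk_get_mem_stats`) is rejected with JSON-RPC error -32601 "Method not found". A toy model of that allow-list gate, with the method names and error code taken from this log:

```shell
# Toy --rpcs-allowed gate: echo 0 for an allowed method, -32601 otherwise.
allowed="spdk_get_version rpc_get_methods"
check_rpc() {
    local method=$1 m
    for m in $allowed; do
        [[ $m == "$method" ]] && { echo 0; return; }
    done
    echo -32601   # JSON-RPC "Method not found"
}
```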
00:06:09.641 [2024-05-12 06:46:16.614804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2925227 ] 00:06:09.641 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.641 [2024-05-12 06:46:16.677021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.900 [2024-05-12 06:46:16.794718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.900 [2024-05-12 06:46:16.794886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.476 06:46:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:10.476 06:46:17 -- common/autotest_common.sh@852 -- # return 0 00:06:10.476 06:46:17 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:10.734 { 00:06:10.734 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:06:10.734 "fields": { 00:06:10.734 "major": 24, 00:06:10.734 "minor": 1, 00:06:10.734 "patch": 1, 00:06:10.734 "suffix": "-pre", 00:06:10.734 "commit": "36faa8c31" 00:06:10.734 } 00:06:10.734 } 00:06:10.734 06:46:17 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:10.734 06:46:17 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:10.734 06:46:17 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:10.734 06:46:17 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:10.734 06:46:17 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:10.734 06:46:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:10.734 06:46:17 -- common/autotest_common.sh@10 -- # set +x 00:06:10.734 06:46:17 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:10.734 06:46:17 -- app/cmdline.sh@26 -- # sort 00:06:10.990 06:46:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:10.990 
06:46:17 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:10.990 06:46:17 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:10.990 06:46:17 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.990 06:46:17 -- common/autotest_common.sh@640 -- # local es=0 00:06:10.990 06:46:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.990 06:46:17 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.990 06:46:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:10.990 06:46:17 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.990 06:46:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:10.990 06:46:17 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.990 06:46:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:10.990 06:46:17 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.990 06:46:17 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:10.990 06:46:17 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.247 request: 00:06:11.247 { 00:06:11.247 "method": "env_dpdk_get_mem_stats", 00:06:11.247 "req_id": 1 00:06:11.247 } 00:06:11.247 Got JSON-RPC error response 00:06:11.247 response: 00:06:11.247 { 00:06:11.247 "code": -32601, 00:06:11.247 "message": "Method not found" 00:06:11.247 } 00:06:11.247 06:46:18 -- common/autotest_common.sh@643 
-- # es=1 00:06:11.247 06:46:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:11.247 06:46:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:11.247 06:46:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:11.247 06:46:18 -- app/cmdline.sh@1 -- # killprocess 2925227 00:06:11.247 06:46:18 -- common/autotest_common.sh@926 -- # '[' -z 2925227 ']' 00:06:11.247 06:46:18 -- common/autotest_common.sh@930 -- # kill -0 2925227 00:06:11.247 06:46:18 -- common/autotest_common.sh@931 -- # uname 00:06:11.247 06:46:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:11.247 06:46:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2925227 00:06:11.247 06:46:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:11.247 06:46:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:11.247 06:46:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2925227' 00:06:11.247 killing process with pid 2925227 00:06:11.247 06:46:18 -- common/autotest_common.sh@945 -- # kill 2925227 00:06:11.247 06:46:18 -- common/autotest_common.sh@950 -- # wait 2925227 00:06:11.814 00:06:11.814 real 0m2.172s 00:06:11.814 user 0m2.763s 00:06:11.814 sys 0m0.512s 00:06:11.814 06:46:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.814 06:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:11.814 ************************************ 00:06:11.814 END TEST app_cmdline 00:06:11.814 ************************************ 00:06:11.814 06:46:18 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:11.814 06:46:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.814 06:46:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.814 06:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:11.814 ************************************ 00:06:11.814 START TEST version 00:06:11.814 
************************************ 00:06:11.814 06:46:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:11.814 * Looking for test storage... 00:06:11.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:11.814 06:46:18 -- app/version.sh@17 -- # get_header_version major 00:06:11.814 06:46:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.814 06:46:18 -- app/version.sh@14 -- # cut -f2 00:06:11.814 06:46:18 -- app/version.sh@14 -- # tr -d '"' 00:06:11.814 06:46:18 -- app/version.sh@17 -- # major=24 00:06:11.814 06:46:18 -- app/version.sh@18 -- # get_header_version minor 00:06:11.814 06:46:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.814 06:46:18 -- app/version.sh@14 -- # cut -f2 00:06:11.814 06:46:18 -- app/version.sh@14 -- # tr -d '"' 00:06:11.814 06:46:18 -- app/version.sh@18 -- # minor=1 00:06:11.814 06:46:18 -- app/version.sh@19 -- # get_header_version patch 00:06:11.814 06:46:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.814 06:46:18 -- app/version.sh@14 -- # cut -f2 00:06:11.814 06:46:18 -- app/version.sh@14 -- # tr -d '"' 00:06:11.814 06:46:18 -- app/version.sh@19 -- # patch=1 00:06:11.814 06:46:18 -- app/version.sh@20 -- # get_header_version suffix 00:06:11.814 06:46:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.814 06:46:18 -- app/version.sh@14 -- # cut -f2 00:06:11.814 06:46:18 -- app/version.sh@14 -- # tr -d '"' 00:06:11.814 06:46:18 -- app/version.sh@20 -- # suffix=-pre 00:06:11.814 06:46:18 -- 
app/version.sh@22 -- # version=24.1 00:06:11.814 06:46:18 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:11.814 06:46:18 -- app/version.sh@25 -- # version=24.1.1 00:06:11.814 06:46:18 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:11.814 06:46:18 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:11.814 06:46:18 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:11.814 06:46:18 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:11.814 06:46:18 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:11.814 00:06:11.814 real 0m0.105s 00:06:11.814 user 0m0.053s 00:06:11.814 sys 0m0.074s 00:06:11.814 06:46:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.814 06:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:11.814 ************************************ 00:06:11.814 END TEST version 00:06:11.814 ************************************ 00:06:11.814 06:46:18 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:11.814 06:46:18 -- spdk/autotest.sh@204 -- # uname -s 00:06:11.814 06:46:18 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:11.814 06:46:18 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:11.814 06:46:18 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:11.814 06:46:18 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:11.814 06:46:18 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:11.814 06:46:18 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:11.814 06:46:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:11.814 06:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:11.814 06:46:18 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:11.814 06:46:18 -- spdk/autotest.sh@278 -- 
# '[' 0 -eq 1 ']' 00:06:11.814 06:46:18 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:11.814 06:46:18 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:11.814 06:46:18 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:11.814 06:46:18 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:11.814 06:46:18 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:11.814 06:46:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:11.814 06:46:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.814 06:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:11.814 ************************************ 00:06:11.814 START TEST nvmf_tcp 00:06:11.814 ************************************ 00:06:11.814 06:46:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:11.814 * Looking for test storage... 00:06:11.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:11.814 06:46:18 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:11.814 06:46:18 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:11.814 06:46:18 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.814 06:46:18 -- nvmf/common.sh@7 -- # uname -s 00:06:11.814 06:46:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.814 06:46:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.814 06:46:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.814 06:46:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.814 06:46:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.814 06:46:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.814 06:46:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.814 06:46:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.814 06:46:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.814 06:46:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.814 06:46:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.814 06:46:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.814 06:46:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.814 06:46:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.814 06:46:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.814 06:46:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.814 06:46:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.814 06:46:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.814 06:46:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.814 06:46:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.814 06:46:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.814 06:46:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.814 06:46:18 -- paths/export.sh@5 -- # export PATH 00:06:11.814 06:46:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.814 06:46:18 -- nvmf/common.sh@46 -- # : 0 00:06:11.814 06:46:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:11.814 06:46:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:11.814 
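An aside on the paths/export.sh entries traced above: each sourcing prepends the same tool directories again, so PATH visibly accumulates duplicates across runs. Duplicates are harmless to lookup but a common cleanup idiom (not part of SPDK, shown here only as a sketch with made-up directories) is:

```shell
# Remove duplicate entries from a PATH-like string, keeping first occurrence.
# Purely illustrative; the trace above leaves the duplicates in place.
dedup_path() {
    local out= seen= dir
    local IFS=':'                  # split the argument on ':'
    for dir in $1; do
        case ":$seen:" in
            *":$dir:"*) ;;         # already kept once, skip the repeat
            *) seen="$seen:$dir"; out="${out:+$out:}$dir" ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/a/bin:/b/bin:/a/bin:/c/bin:/b/bin"   # prints /a/bin:/b/bin:/c/bin
```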
06:46:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:11.814 06:46:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.814 06:46:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.814 06:46:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:11.814 06:46:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:11.814 06:46:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:11.814 06:46:18 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:11.814 06:46:18 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:11.814 06:46:18 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:11.814 06:46:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:11.814 06:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:11.814 06:46:18 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:11.814 06:46:18 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:11.814 06:46:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:11.814 06:46:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.814 06:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:11.814 ************************************ 00:06:11.814 START TEST nvmf_example 00:06:11.814 ************************************ 00:06:11.814 06:46:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:12.073 * Looking for test storage... 
00:06:12.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:12.073 06:46:18 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.073 06:46:18 -- nvmf/common.sh@7 -- # uname -s 00:06:12.073 06:46:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.074 06:46:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.074 06:46:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.074 06:46:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.074 06:46:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.074 06:46:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.074 06:46:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.074 06:46:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.074 06:46:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.074 06:46:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.074 06:46:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.074 06:46:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.074 06:46:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.074 06:46:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.074 06:46:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.074 06:46:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.074 06:46:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.074 06:46:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.074 06:46:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.074 06:46:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.074 06:46:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.074 06:46:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.074 06:46:18 -- paths/export.sh@5 -- # export PATH 00:06:12.074 06:46:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.074 06:46:18 -- nvmf/common.sh@46 -- # : 0 00:06:12.074 06:46:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:12.074 06:46:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:12.074 06:46:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:12.074 06:46:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.074 06:46:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.074 06:46:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:12.074 06:46:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:12.074 06:46:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:12.074 06:46:18 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:12.074 06:46:18 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:12.074 06:46:18 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:12.074 06:46:18 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:12.074 06:46:18 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:12.074 06:46:18 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:12.074 06:46:18 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:12.074 06:46:18 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:12.074 06:46:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:12.074 06:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:12.074 06:46:18 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:06:12.074 06:46:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:12.074 06:46:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.074 06:46:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:12.074 06:46:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:12.074 06:46:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:12.074 06:46:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.074 06:46:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:12.074 06:46:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.074 06:46:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:12.074 06:46:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:12.074 06:46:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:12.074 06:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.975 06:46:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:13.975 06:46:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:13.975 06:46:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:13.975 06:46:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:13.975 06:46:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:13.975 06:46:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:13.975 06:46:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:13.975 06:46:20 -- nvmf/common.sh@294 -- # net_devs=() 00:06:13.975 06:46:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:13.975 06:46:20 -- nvmf/common.sh@295 -- # e810=() 00:06:13.975 06:46:20 -- nvmf/common.sh@295 -- # local -ga e810 00:06:13.975 06:46:20 -- nvmf/common.sh@296 -- # x722=() 00:06:13.975 06:46:20 -- nvmf/common.sh@296 -- # local -ga x722 00:06:13.975 06:46:20 -- nvmf/common.sh@297 -- # mlx=() 00:06:13.975 06:46:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:13.975 06:46:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:06:13.975 06:46:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.975 06:46:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:13.975 06:46:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:13.975 06:46:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:13.975 06:46:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:13.975 06:46:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:13.975 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:13.975 06:46:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
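The nvmf/common.sh@382–388 lines above map each discovered PCI function to its kernel net interface by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the directory prefix with the `##*/` parameter expansion. A minimal sketch of that expansion, using hard-coded sample paths in place of a live sysfs (the device names are copied from the trace):

```shell
# Stand-ins for the glob results of /sys/bus/pci/devices/$pci/net/* .
pci_net_devs=("/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:0a:00.1/net/cvl_0_1")

# "${arr[@]##*/}" removes the longest prefix matching '*/' from each element,
# leaving just the interface names -- exactly what common.sh@387 does.
net_devs=("${pci_net_devs[@]##*/}")

echo "${net_devs[@]}"   # cvl_0_0 cvl_0_1
```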
00:06:13.975 06:46:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:13.975 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:13.975 06:46:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:13.975 06:46:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:13.975 06:46:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.975 06:46:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:13.975 06:46:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.975 06:46:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:13.975 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:13.975 06:46:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.975 06:46:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:13.975 06:46:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.975 06:46:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:13.975 06:46:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.975 06:46:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:13.975 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:13.975 06:46:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.975 06:46:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:13.975 06:46:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:13.975 06:46:20 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:13.975 06:46:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:13.976 06:46:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:13.976 06:46:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.976 06:46:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.976 06:46:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.976 06:46:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:13.976 06:46:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.976 06:46:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.976 06:46:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:13.976 06:46:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.976 06:46:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.976 06:46:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:13.976 06:46:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:13.976 06:46:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.976 06:46:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.976 06:46:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.976 06:46:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.976 06:46:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:13.976 06:46:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.976 06:46:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.976 06:46:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.976 06:46:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:13.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:13.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:06:13.976 00:06:13.976 --- 10.0.0.2 ping statistics --- 00:06:13.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.976 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:06:13.976 06:46:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:06:13.976 00:06:13.976 --- 10.0.0.1 ping statistics --- 00:06:13.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.976 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:06:13.976 06:46:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.976 06:46:21 -- nvmf/common.sh@410 -- # return 0 00:06:13.976 06:46:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:13.976 06:46:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.976 06:46:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:13.976 06:46:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:13.976 06:46:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.976 06:46:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:13.976 06:46:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:14.234 06:46:21 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:14.234 06:46:21 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:14.234 06:46:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:14.234 06:46:21 -- common/autotest_common.sh@10 -- # set +x 00:06:14.234 06:46:21 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:14.234 06:46:21 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:14.234 06:46:21 -- target/nvmf_example.sh@34 -- # nvmfpid=2927266 00:06:14.234 06:46:21 -- target/nvmf_example.sh@33 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:14.234 06:46:21 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:14.234 06:46:21 -- target/nvmf_example.sh@36 -- # waitforlisten 2927266 00:06:14.234 06:46:21 -- common/autotest_common.sh@819 -- # '[' -z 2927266 ']' 00:06:14.234 06:46:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.234 06:46:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.234 06:46:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.234 06:46:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.234 06:46:21 -- common/autotest_common.sh@10 -- # set +x 00:06:14.234 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.165 06:46:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.165 06:46:22 -- common/autotest_common.sh@852 -- # return 0 00:06:15.165 06:46:22 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:15.165 06:46:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:15.165 06:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.165 06:46:22 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:15.165 06:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.165 06:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.165 06:46:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.165 06:46:22 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:15.165 06:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.165 06:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.165 06:46:22 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.165 06:46:22 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:15.165 06:46:22 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.165 06:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.166 06:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.166 06:46:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.166 06:46:22 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:15.166 06:46:22 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:15.166 06:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.166 06:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.166 06:46:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.166 06:46:22 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.166 06:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.166 06:46:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.166 06:46:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.166 06:46:22 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:15.166 06:46:22 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:15.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.358 Initializing NVMe Controllers 00:06:27.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:27.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:27.358 Initialization complete. 
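The spdk_nvme_perf invocation above passes its target via `-r` as a space-separated `key:value` transport ID string. A small sketch of assembling that string from the same values the test uses (addresses and NQN copied from the trace; the `build_trid` helper name is our own, not an SPDK function):

```shell
# Build the transport ID string spdk_nvme_perf expects after -r.
build_trid() {
    local traddr=$1 trsvcid=$2 subnqn=$3
    printf 'trtype:tcp adrfam:IPv4 traddr:%s trsvcid:%s subnqn:%s\n' \
        "$traddr" "$trsvcid" "$subnqn"
}

trid=$(build_trid 10.0.0.2 4420 nqn.2016-06.io.spdk:cnode1)
echo "$trid"
# e.g. spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r "$trid"
```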
Launching workers. 00:06:27.358 ======================================================== 00:06:27.358 Latency(us) 00:06:27.358 Device Information : IOPS MiB/s Average min max 00:06:27.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14876.04 58.11 4301.85 649.88 15318.23 00:06:27.359 ======================================================== 00:06:27.359 Total : 14876.04 58.11 4301.85 649.88 15318.23 00:06:27.359 00:06:27.359 06:46:32 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:27.359 06:46:32 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:27.359 06:46:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:27.359 06:46:32 -- nvmf/common.sh@116 -- # sync 00:06:27.359 06:46:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:27.359 06:46:32 -- nvmf/common.sh@119 -- # set +e 00:06:27.359 06:46:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:27.359 06:46:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:27.359 rmmod nvme_tcp 00:06:27.359 rmmod nvme_fabrics 00:06:27.359 rmmod nvme_keyring 00:06:27.359 06:46:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:27.359 06:46:32 -- nvmf/common.sh@123 -- # set -e 00:06:27.359 06:46:32 -- nvmf/common.sh@124 -- # return 0 00:06:27.359 06:46:32 -- nvmf/common.sh@477 -- # '[' -n 2927266 ']' 00:06:27.359 06:46:32 -- nvmf/common.sh@478 -- # killprocess 2927266 00:06:27.359 06:46:32 -- common/autotest_common.sh@926 -- # '[' -z 2927266 ']' 00:06:27.359 06:46:32 -- common/autotest_common.sh@930 -- # kill -0 2927266 00:06:27.359 06:46:32 -- common/autotest_common.sh@931 -- # uname 00:06:27.359 06:46:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.359 06:46:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2927266 00:06:27.359 06:46:32 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:06:27.359 06:46:32 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:06:27.359 06:46:32 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 2927266' 00:06:27.359 killing process with pid 2927266 00:06:27.359 06:46:32 -- common/autotest_common.sh@945 -- # kill 2927266 00:06:27.359 06:46:32 -- common/autotest_common.sh@950 -- # wait 2927266 00:06:27.359 nvmf threads initialize successfully 00:06:27.359 bdev subsystem init successfully 00:06:27.359 created a nvmf target service 00:06:27.359 create targets's poll groups done 00:06:27.359 all subsystems of target started 00:06:27.359 nvmf target is running 00:06:27.359 all subsystems of target stopped 00:06:27.359 destroy targets's poll groups done 00:06:27.359 destroyed the nvmf target service 00:06:27.359 bdev subsystem finish successfully 00:06:27.359 nvmf threads destroy successfully 00:06:27.359 06:46:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:27.359 06:46:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:27.359 06:46:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:27.359 06:46:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:27.359 06:46:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:27.359 06:46:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.359 06:46:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.359 06:46:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.930 06:46:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:06:27.930 06:46:34 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:27.930 06:46:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:27.930 06:46:34 -- common/autotest_common.sh@10 -- # set +x 00:06:27.930 00:06:27.930 real 0m15.860s 00:06:27.930 user 0m44.308s 00:06:27.930 sys 0m3.599s 00:06:27.930 06:46:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.930 06:46:34 -- common/autotest_common.sh@10 -- # set +x 00:06:27.930 ************************************ 00:06:27.930 END TEST 
nvmf_example 00:06:27.930 ************************************ 00:06:27.930 06:46:34 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:27.930 06:46:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:27.930 06:46:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.930 06:46:34 -- common/autotest_common.sh@10 -- # set +x 00:06:27.930 ************************************ 00:06:27.930 START TEST nvmf_filesystem 00:06:27.930 ************************************ 00:06:27.930 06:46:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:27.930 * Looking for test storage... 00:06:27.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.930 06:46:34 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:27.930 06:46:34 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:27.930 06:46:34 -- common/autotest_common.sh@34 -- # set -e 00:06:27.930 06:46:34 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:27.930 06:46:34 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:27.930 06:46:34 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:27.930 06:46:34 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:27.930 06:46:34 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:27.930 06:46:34 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:27.930 06:46:34 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:27.930 06:46:34 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:27.930 06:46:34 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:27.930 
06:46:34 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:27.930 06:46:34 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:27.930 06:46:34 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:27.930 06:46:34 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:27.930 06:46:34 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:27.930 06:46:34 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:27.930 06:46:34 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:27.930 06:46:34 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:27.930 06:46:34 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:27.930 06:46:34 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:27.930 06:46:34 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:27.930 06:46:34 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:27.930 06:46:34 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:27.930 06:46:34 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:27.930 06:46:34 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:27.930 06:46:34 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:27.930 06:46:34 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:27.930 06:46:34 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:27.930 06:46:34 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:27.930 06:46:34 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:27.930 06:46:34 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:27.930 06:46:34 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:27.930 06:46:34 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:27.930 06:46:34 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:27.930 06:46:34 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:27.930 06:46:34 -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:06:27.930 06:46:34 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:27.930 06:46:34 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:27.930 06:46:34 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:27.930 06:46:34 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:27.930 06:46:34 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:27.930 06:46:34 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:27.930 06:46:34 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:27.930 06:46:34 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:27.930 06:46:34 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:27.930 06:46:34 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:27.930 06:46:34 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:27.930 06:46:34 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:27.930 06:46:34 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:27.930 06:46:34 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:27.930 06:46:34 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:27.930 06:46:34 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:27.930 06:46:34 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:27.930 06:46:34 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:27.930 06:46:34 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:27.930 06:46:34 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:27.930 06:46:34 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:27.930 06:46:34 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:27.930 06:46:34 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:27.930 06:46:34 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:27.930 06:46:34 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:27.930 06:46:34 -- common/build_config.sh@57 -- # 
CONFIG_IPSEC_MB_DIR= 00:06:27.930 06:46:34 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:27.930 06:46:34 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:27.930 06:46:34 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:06:27.930 06:46:34 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:27.930 06:46:34 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:27.930 06:46:34 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:27.930 06:46:34 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:27.930 06:46:34 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:27.930 06:46:34 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:27.930 06:46:34 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:27.930 06:46:34 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:27.930 06:46:34 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:27.930 06:46:34 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:27.930 06:46:34 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:27.930 06:46:34 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:27.930 06:46:34 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:27.930 06:46:34 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:27.930 06:46:34 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:27.930 06:46:34 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:27.931 06:46:34 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:27.931 06:46:34 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:27.931 06:46:34 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:27.931 06:46:34 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:27.931 06:46:34 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:27.931 06:46:34 -- common/applications.sh@8 -- # readlink 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:27.931 06:46:34 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:27.931 06:46:34 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:27.931 06:46:34 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:27.931 06:46:34 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:27.931 06:46:34 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:27.931 06:46:34 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:27.931 06:46:34 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:27.931 06:46:34 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:27.931 06:46:34 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:27.931 06:46:34 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:27.931 06:46:34 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:27.931 06:46:34 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:27.931 06:46:34 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:27.931 #define SPDK_CONFIG_H 00:06:27.931 #define SPDK_CONFIG_APPS 1 00:06:27.931 #define SPDK_CONFIG_ARCH native 00:06:27.931 #undef SPDK_CONFIG_ASAN 00:06:27.931 #undef SPDK_CONFIG_AVAHI 00:06:27.931 #undef SPDK_CONFIG_CET 00:06:27.931 #define SPDK_CONFIG_COVERAGE 1 00:06:27.931 #define SPDK_CONFIG_CROSS_PREFIX 00:06:27.931 #undef SPDK_CONFIG_CRYPTO 00:06:27.931 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:27.931 #undef SPDK_CONFIG_CUSTOMOCF 00:06:27.931 #undef SPDK_CONFIG_DAOS 00:06:27.931 #define SPDK_CONFIG_DAOS_DIR 00:06:27.931 
#define SPDK_CONFIG_DEBUG 1 00:06:27.931 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:27.931 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:27.931 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:27.931 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:27.931 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:27.931 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:27.931 #define SPDK_CONFIG_EXAMPLES 1 00:06:27.931 #undef SPDK_CONFIG_FC 00:06:27.931 #define SPDK_CONFIG_FC_PATH 00:06:27.931 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:27.931 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:27.931 #undef SPDK_CONFIG_FUSE 00:06:27.931 #undef SPDK_CONFIG_FUZZER 00:06:27.931 #define SPDK_CONFIG_FUZZER_LIB 00:06:27.931 #undef SPDK_CONFIG_GOLANG 00:06:27.931 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:27.931 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:27.931 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:27.931 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:27.931 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:27.931 #define SPDK_CONFIG_IDXD 1 00:06:27.931 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:27.931 #undef SPDK_CONFIG_IPSEC_MB 00:06:27.931 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:27.931 #define SPDK_CONFIG_ISAL 1 00:06:27.931 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:27.931 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:27.931 #define SPDK_CONFIG_LIBDIR 00:06:27.931 #undef SPDK_CONFIG_LTO 00:06:27.931 #define SPDK_CONFIG_MAX_LCORES 00:06:27.931 #define SPDK_CONFIG_NVME_CUSE 1 00:06:27.931 #undef SPDK_CONFIG_OCF 00:06:27.931 #define SPDK_CONFIG_OCF_PATH 00:06:27.931 #define SPDK_CONFIG_OPENSSL_PATH 00:06:27.931 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:27.931 #undef SPDK_CONFIG_PGO_USE 00:06:27.931 #define SPDK_CONFIG_PREFIX /usr/local 00:06:27.931 #undef SPDK_CONFIG_RAID5F 00:06:27.931 #undef SPDK_CONFIG_RBD 00:06:27.931 #define SPDK_CONFIG_RDMA 1 00:06:27.931 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:27.931 #define 
SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:27.931 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:27.931 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:27.931 #define SPDK_CONFIG_SHARED 1 00:06:27.931 #undef SPDK_CONFIG_SMA 00:06:27.931 #define SPDK_CONFIG_TESTS 1 00:06:27.931 #undef SPDK_CONFIG_TSAN 00:06:27.931 #define SPDK_CONFIG_UBLK 1 00:06:27.931 #define SPDK_CONFIG_UBSAN 1 00:06:27.931 #undef SPDK_CONFIG_UNIT_TESTS 00:06:27.931 #undef SPDK_CONFIG_URING 00:06:27.931 #define SPDK_CONFIG_URING_PATH 00:06:27.931 #undef SPDK_CONFIG_URING_ZNS 00:06:27.931 #undef SPDK_CONFIG_USDT 00:06:27.931 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:27.931 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:27.931 #undef SPDK_CONFIG_VFIO_USER 00:06:27.931 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:27.931 #define SPDK_CONFIG_VHOST 1 00:06:27.931 #define SPDK_CONFIG_VIRTIO 1 00:06:27.931 #undef SPDK_CONFIG_VTUNE 00:06:27.931 #define SPDK_CONFIG_VTUNE_DIR 00:06:27.931 #define SPDK_CONFIG_WERROR 1 00:06:27.931 #define SPDK_CONFIG_WPDK_DIR 00:06:27.931 #undef SPDK_CONFIG_XNVME 00:06:27.931 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:27.931 06:46:34 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:27.931 06:46:34 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.931 06:46:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.931 06:46:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.931 06:46:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.931 06:46:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.931 06:46:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.931 06:46:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.931 06:46:34 -- paths/export.sh@5 -- # export PATH 00:06:27.931 06:46:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.931 06:46:34 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:27.931 06:46:34 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:27.931 06:46:34 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:27.931 06:46:34 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:27.931 06:46:34 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:27.931 06:46:34 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:27.931 06:46:34 -- pm/common@16 -- # TEST_TAG=N/A 00:06:27.931 06:46:34 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:27.931 06:46:34 -- common/autotest_common.sh@52 -- # : 1 00:06:27.931 06:46:34 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:27.931 06:46:34 -- common/autotest_common.sh@56 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:27.931 06:46:34 -- common/autotest_common.sh@58 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:27.931 06:46:34 -- common/autotest_common.sh@60 -- # : 1 00:06:27.931 06:46:34 -- 
common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:27.931 06:46:34 -- common/autotest_common.sh@62 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:27.931 06:46:34 -- common/autotest_common.sh@64 -- # : 00:06:27.931 06:46:34 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:27.931 06:46:34 -- common/autotest_common.sh@66 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:27.931 06:46:34 -- common/autotest_common.sh@68 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:27.931 06:46:34 -- common/autotest_common.sh@70 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:27.931 06:46:34 -- common/autotest_common.sh@72 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:27.931 06:46:34 -- common/autotest_common.sh@74 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:27.931 06:46:34 -- common/autotest_common.sh@76 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:27.931 06:46:34 -- common/autotest_common.sh@78 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:27.931 06:46:34 -- common/autotest_common.sh@80 -- # : 1 00:06:27.931 06:46:34 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:27.931 06:46:34 -- common/autotest_common.sh@82 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:27.931 06:46:34 -- common/autotest_common.sh@84 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:27.931 06:46:34 -- common/autotest_common.sh@86 -- # : 1 00:06:27.931 06:46:34 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:27.931 
06:46:34 -- common/autotest_common.sh@88 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:27.931 06:46:34 -- common/autotest_common.sh@90 -- # : 0 00:06:27.931 06:46:34 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:27.931 06:46:34 -- common/autotest_common.sh@92 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:27.932 06:46:34 -- common/autotest_common.sh@94 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:27.932 06:46:34 -- common/autotest_common.sh@96 -- # : tcp 00:06:27.932 06:46:34 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:27.932 06:46:34 -- common/autotest_common.sh@98 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:27.932 06:46:34 -- common/autotest_common.sh@100 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:27.932 06:46:34 -- common/autotest_common.sh@102 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:27.932 06:46:34 -- common/autotest_common.sh@104 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:27.932 06:46:34 -- common/autotest_common.sh@106 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:27.932 06:46:34 -- common/autotest_common.sh@108 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:27.932 06:46:34 -- common/autotest_common.sh@110 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:27.932 06:46:34 -- common/autotest_common.sh@112 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:27.932 06:46:34 -- common/autotest_common.sh@114 -- # : 0 
00:06:27.932 06:46:34 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:27.932 06:46:34 -- common/autotest_common.sh@116 -- # : 1 00:06:27.932 06:46:34 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:27.932 06:46:34 -- common/autotest_common.sh@118 -- # : 00:06:27.932 06:46:34 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:27.932 06:46:34 -- common/autotest_common.sh@120 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:27.932 06:46:34 -- common/autotest_common.sh@122 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:27.932 06:46:34 -- common/autotest_common.sh@124 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:27.932 06:46:34 -- common/autotest_common.sh@126 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:27.932 06:46:34 -- common/autotest_common.sh@128 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:27.932 06:46:34 -- common/autotest_common.sh@130 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:27.932 06:46:34 -- common/autotest_common.sh@132 -- # : 00:06:27.932 06:46:34 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:27.932 06:46:34 -- common/autotest_common.sh@134 -- # : true 00:06:27.932 06:46:34 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:27.932 06:46:34 -- common/autotest_common.sh@136 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:27.932 06:46:34 -- common/autotest_common.sh@138 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:27.932 06:46:34 -- common/autotest_common.sh@140 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 
00:06:27.932 06:46:34 -- common/autotest_common.sh@142 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:27.932 06:46:34 -- common/autotest_common.sh@144 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:27.932 06:46:34 -- common/autotest_common.sh@146 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:27.932 06:46:34 -- common/autotest_common.sh@148 -- # : e810 00:06:27.932 06:46:34 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:27.932 06:46:34 -- common/autotest_common.sh@150 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:27.932 06:46:34 -- common/autotest_common.sh@152 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:06:27.932 06:46:34 -- common/autotest_common.sh@154 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:27.932 06:46:34 -- common/autotest_common.sh@156 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:27.932 06:46:34 -- common/autotest_common.sh@158 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:27.932 06:46:34 -- common/autotest_common.sh@160 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:27.932 06:46:34 -- common/autotest_common.sh@163 -- # : 00:06:27.932 06:46:34 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:27.932 06:46:34 -- common/autotest_common.sh@165 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:27.932 06:46:34 -- common/autotest_common.sh@167 -- # : 0 00:06:27.932 06:46:34 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:27.932 06:46:34 -- 
common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:27.932 06:46:34 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:27.932 06:46:34 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:27.932 06:46:34 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:27.932 06:46:34 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:27.932 06:46:34 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:27.932 06:46:34 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:27.932 06:46:34 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:27.932 06:46:34 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:27.932 06:46:34 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:27.932 06:46:34 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:27.932 06:46:34 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:27.932 06:46:34 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:27.932 06:46:34 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:27.932 06:46:34 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:27.932 06:46:34 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:27.932 06:46:34 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:27.932 06:46:34 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:27.932 06:46:34 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:27.932 06:46:34 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:27.932 06:46:34 -- common/autotest_common.sh@196 -- # cat 00:06:27.932 06:46:34 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:27.932 06:46:34 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:27.932 06:46:34 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:27.932 06:46:34 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:27.932 06:46:34 -- 
common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:27.932 06:46:34 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:27.932 06:46:34 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:27.932 06:46:34 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:27.932 06:46:34 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:27.932 06:46:34 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:27.932 06:46:34 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:27.932 06:46:34 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:27.932 06:46:34 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:27.932 06:46:34 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:27.932 06:46:34 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:27.932 06:46:34 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:27.932 06:46:34 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:27.932 06:46:34 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:27.932 06:46:34 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:27.932 06:46:34 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:27.932 06:46:34 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:27.932 06:46:34 -- 
common/autotest_common.sh@249 -- # valgrind= 00:06:27.932 06:46:34 -- common/autotest_common.sh@255 -- # uname -s 00:06:27.933 06:46:34 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:27.933 06:46:34 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:27.933 06:46:34 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:06:27.933 06:46:34 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:27.933 06:46:34 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:27.933 06:46:34 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:06:27.933 06:46:34 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:27.933 06:46:34 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:27.933 06:46:34 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:27.933 06:46:34 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:27.933 06:46:34 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:27.933 06:46:34 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:27.933 06:46:34 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:27.933 06:46:34 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:27.933 06:46:34 -- common/autotest_common.sh@309 -- # [[ -z 2929014 ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@309 -- # kill -0 2929014 00:06:27.933 06:46:34 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:27.933 06:46:34 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:27.933 06:46:34 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:27.933 06:46:34 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:27.933 06:46:34 -- 
common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:27.933 06:46:34 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:27.933 06:46:34 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:27.933 06:46:34 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.ijJ089 00:06:27.933 06:46:34 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:27.933 06:46:34 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ijJ089/tests/target /tmp/spdk.ijJ089 00:06:27.933 06:46:34 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:27.933 06:46:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:27.933 06:46:34 -- common/autotest_common.sh@318 -- # df -T 00:06:27.933 06:46:34 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:06:27.933 06:46:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:27.933 06:46:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=972947456 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- 
# sizes["$mount"]=5284429824 00:06:27.933 06:46:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=4311482368 00:06:27.933 06:46:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=52455489536 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994733568 00:06:27.933 06:46:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=9539244032 00:06:27.933 06:46:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996107264 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997364736 00:06:27.933 06:46:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:06:27.933 06:46:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390187008 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398948352 00:06:27.933 06:46:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:06:27.933 06:46:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:27.933 06:46:34 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=30996107264 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997368832 00:06:27.933 06:46:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=1261568 00:06:27.933 06:46:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:27.933 06:46:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199468032 00:06:27.933 06:46:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199472128 00:06:27.933 06:46:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:27.933 06:46:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:27.933 06:46:34 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:27.933 * Looking for test storage... 00:06:27.933 06:46:34 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:27.933 06:46:34 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:27.933 06:46:34 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.933 06:46:34 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:27.933 06:46:34 -- common/autotest_common.sh@363 -- # mount=/ 00:06:27.933 06:46:34 -- common/autotest_common.sh@365 -- # target_space=52455489536 00:06:27.933 06:46:34 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:27.933 06:46:34 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:27.933 06:46:34 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:06:27.933 06:46:34 -- 
common/autotest_common.sh@371 -- # [[ / == / ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@372 -- # new_size=11753836544 00:06:27.933 06:46:34 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:27.933 06:46:34 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.933 06:46:34 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.933 06:46:34 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.933 06:46:34 -- common/autotest_common.sh@380 -- # return 0 00:06:27.933 06:46:34 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:27.933 06:46:34 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:27.933 06:46:34 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:27.933 06:46:34 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:27.933 06:46:34 -- common/autotest_common.sh@1672 -- # true 00:06:27.933 06:46:34 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:27.933 06:46:34 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:27.933 06:46:34 -- common/autotest_common.sh@27 -- # exec 00:06:27.933 06:46:34 -- common/autotest_common.sh@29 -- # exec 00:06:27.933 06:46:34 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:27.933 06:46:34 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:27.933 06:46:34 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:27.933 06:46:34 -- common/autotest_common.sh@18 -- # set -x 00:06:27.933 06:46:34 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.933 06:46:34 -- nvmf/common.sh@7 -- # uname -s 00:06:27.933 06:46:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.933 06:46:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.933 06:46:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.933 06:46:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.933 06:46:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.933 06:46:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.933 06:46:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.933 06:46:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.933 06:46:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.933 06:46:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.933 06:46:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.933 06:46:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:27.933 06:46:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.933 06:46:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.933 06:46:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.933 06:46:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.933 06:46:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.933 06:46:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.933 06:46:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.934 06:46:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.934 06:46:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.934 06:46:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.934 06:46:34 -- paths/export.sh@5 -- # export PATH 00:06:27.934 06:46:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.934 06:46:34 -- nvmf/common.sh@46 -- # : 0 00:06:27.934 06:46:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:27.934 06:46:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:27.934 06:46:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:27.934 06:46:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.934 06:46:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.934 06:46:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:27.934 06:46:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:27.934 06:46:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:27.934 06:46:34 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:27.934 06:46:34 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:27.934 06:46:34 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:27.934 06:46:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:27.934 06:46:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.934 06:46:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:27.934 06:46:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:27.934 06:46:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:27.934 06:46:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.934 06:46:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.934 06:46:34 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.934 06:46:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:27.934 06:46:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:27.934 06:46:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:27.934 06:46:34 -- common/autotest_common.sh@10 -- # set +x 00:06:29.834 06:46:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:29.834 06:46:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:29.834 06:46:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:29.834 06:46:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:29.834 06:46:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:29.834 06:46:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:29.834 06:46:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:29.834 06:46:36 -- nvmf/common.sh@294 -- # net_devs=() 00:06:29.834 06:46:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:29.834 06:46:36 -- nvmf/common.sh@295 -- # e810=() 00:06:29.834 06:46:36 -- nvmf/common.sh@295 -- # local -ga e810 00:06:29.834 06:46:36 -- nvmf/common.sh@296 -- # x722=() 00:06:29.834 06:46:36 -- nvmf/common.sh@296 -- # local -ga x722 00:06:29.834 06:46:36 -- nvmf/common.sh@297 -- # mlx=() 00:06:29.834 06:46:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:29.834 06:46:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.834 06:46:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:29.834 06:46:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:29.834 06:46:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:29.834 06:46:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:29.834 06:46:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:29.834 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:29.834 06:46:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:29.834 06:46:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:29.834 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:29.834 06:46:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:29.834 06:46:36 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:29.834 06:46:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.834 06:46:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:29.834 06:46:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.834 06:46:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:29.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:29.834 06:46:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.834 06:46:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:29.834 06:46:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.834 06:46:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:29.834 06:46:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.834 06:46:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:29.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:29.834 06:46:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.834 06:46:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:29.834 06:46:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:29.834 06:46:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:29.834 06:46:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:29.834 06:46:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.834 06:46:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.834 06:46:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.834 06:46:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:29.834 06:46:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.834 06:46:36 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.834 06:46:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:29.834 06:46:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.834 06:46:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.834 06:46:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:29.834 06:46:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:29.834 06:46:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.834 06:46:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.834 06:46:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.834 06:46:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.834 06:46:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:29.834 06:46:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.834 06:46:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.834 06:46:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.136 06:46:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:30.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:06:30.136 00:06:30.136 --- 10.0.0.2 ping statistics --- 00:06:30.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.136 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:30.136 06:46:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:30.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:06:30.136 00:06:30.136 --- 10.0.0.1 ping statistics --- 00:06:30.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.136 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:06:30.136 06:46:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.136 06:46:36 -- nvmf/common.sh@410 -- # return 0 00:06:30.136 06:46:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:30.136 06:46:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.136 06:46:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:30.136 06:46:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:30.136 06:46:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.136 06:46:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:30.136 06:46:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:30.136 06:46:37 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:30.136 06:46:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:30.136 06:46:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.136 06:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:30.136 ************************************ 00:06:30.136 START TEST nvmf_filesystem_no_in_capsule 00:06:30.136 ************************************ 00:06:30.136 06:46:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:06:30.136 06:46:37 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:30.136 06:46:37 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:30.136 06:46:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:30.136 06:46:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:30.136 06:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:30.136 06:46:37 -- nvmf/common.sh@469 -- # nvmfpid=2930638 00:06:30.136 06:46:37 -- nvmf/common.sh@468 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:30.136 06:46:37 -- nvmf/common.sh@470 -- # waitforlisten 2930638 00:06:30.136 06:46:37 -- common/autotest_common.sh@819 -- # '[' -z 2930638 ']' 00:06:30.136 06:46:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.136 06:46:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.136 06:46:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.136 06:46:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.136 06:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:30.136 [2024-05-12 06:46:37.055070] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:30.136 [2024-05-12 06:46:37.055155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.136 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.136 [2024-05-12 06:46:37.132334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.136 [2024-05-12 06:46:37.258643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.136 [2024-05-12 06:46:37.258800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.136 [2024-05-12 06:46:37.258821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.136 [2024-05-12 06:46:37.258835] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:30.136 [2024-05-12 06:46:37.258913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.136 [2024-05-12 06:46:37.258986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.136 [2024-05-12 06:46:37.259011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.136 [2024-05-12 06:46:37.259016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.074 06:46:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.074 06:46:38 -- common/autotest_common.sh@852 -- # return 0 00:06:31.075 06:46:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:31.075 06:46:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:31.075 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.075 06:46:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.075 06:46:38 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:31.075 06:46:38 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:31.075 06:46:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.075 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.075 [2024-05-12 06:46:38.116397] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.075 06:46:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.075 06:46:38 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:31.075 06:46:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.075 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.333 Malloc1 00:06:31.333 06:46:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.333 06:46:38 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:31.333 06:46:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.333 06:46:38 -- 
common/autotest_common.sh@10 -- # set +x 00:06:31.333 06:46:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.333 06:46:38 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:31.333 06:46:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.333 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.333 06:46:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.333 06:46:38 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:31.333 06:46:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.333 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.333 [2024-05-12 06:46:38.306139] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.333 06:46:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.333 06:46:38 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:31.333 06:46:38 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:31.333 06:46:38 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:31.333 06:46:38 -- common/autotest_common.sh@1359 -- # local bs 00:06:31.333 06:46:38 -- common/autotest_common.sh@1360 -- # local nb 00:06:31.333 06:46:38 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:31.333 06:46:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.333 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.333 06:46:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.333 06:46:38 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:31.333 { 00:06:31.333 "name": "Malloc1", 00:06:31.333 "aliases": [ 00:06:31.333 "579fde80-e865-4d40-bddf-9305e10400c8" 00:06:31.333 ], 00:06:31.333 "product_name": "Malloc disk", 00:06:31.333 "block_size": 512, 00:06:31.333 "num_blocks": 1048576, 00:06:31.333 "uuid": 
"579fde80-e865-4d40-bddf-9305e10400c8", 00:06:31.333 "assigned_rate_limits": { 00:06:31.333 "rw_ios_per_sec": 0, 00:06:31.333 "rw_mbytes_per_sec": 0, 00:06:31.333 "r_mbytes_per_sec": 0, 00:06:31.334 "w_mbytes_per_sec": 0 00:06:31.334 }, 00:06:31.334 "claimed": true, 00:06:31.334 "claim_type": "exclusive_write", 00:06:31.334 "zoned": false, 00:06:31.334 "supported_io_types": { 00:06:31.334 "read": true, 00:06:31.334 "write": true, 00:06:31.334 "unmap": true, 00:06:31.334 "write_zeroes": true, 00:06:31.334 "flush": true, 00:06:31.334 "reset": true, 00:06:31.334 "compare": false, 00:06:31.334 "compare_and_write": false, 00:06:31.334 "abort": true, 00:06:31.334 "nvme_admin": false, 00:06:31.334 "nvme_io": false 00:06:31.334 }, 00:06:31.334 "memory_domains": [ 00:06:31.334 { 00:06:31.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:31.334 "dma_device_type": 2 00:06:31.334 } 00:06:31.334 ], 00:06:31.334 "driver_specific": {} 00:06:31.334 } 00:06:31.334 ]' 00:06:31.334 06:46:38 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:31.334 06:46:38 -- common/autotest_common.sh@1362 -- # bs=512 00:06:31.334 06:46:38 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:31.334 06:46:38 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:31.334 06:46:38 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:31.334 06:46:38 -- common/autotest_common.sh@1367 -- # echo 512 00:06:31.334 06:46:38 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:31.334 06:46:38 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:32.272 06:46:39 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:32.272 06:46:39 -- common/autotest_common.sh@1177 -- # local i=0 00:06:32.272 06:46:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:06:32.272 06:46:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:32.272 06:46:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:34.176 06:46:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:34.176 06:46:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:34.176 06:46:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:34.176 06:46:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:34.176 06:46:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:34.176 06:46:41 -- common/autotest_common.sh@1187 -- # return 0 00:06:34.177 06:46:41 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:34.177 06:46:41 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:34.177 06:46:41 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:34.177 06:46:41 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:34.177 06:46:41 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:34.177 06:46:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:34.177 06:46:41 -- setup/common.sh@80 -- # echo 536870912 00:06:34.177 06:46:41 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:34.177 06:46:41 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:34.177 06:46:41 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:34.177 06:46:41 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:34.436 06:46:41 -- target/filesystem.sh@69 -- # partprobe 00:06:35.004 06:46:42 -- target/filesystem.sh@70 -- # sleep 1 00:06:35.943 06:46:43 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:35.943 06:46:43 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:35.943 06:46:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:35.943 06:46:43 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:06:35.943 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:35.943 ************************************ 00:06:35.943 START TEST filesystem_ext4 00:06:35.943 ************************************ 00:06:35.943 06:46:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:35.943 06:46:43 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:35.943 06:46:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:35.943 06:46:43 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:35.943 06:46:43 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:35.943 06:46:43 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:35.943 06:46:43 -- common/autotest_common.sh@904 -- # local i=0 00:06:35.943 06:46:43 -- common/autotest_common.sh@905 -- # local force 00:06:35.943 06:46:43 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:35.943 06:46:43 -- common/autotest_common.sh@908 -- # force=-F 00:06:35.943 06:46:43 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:35.943 mke2fs 1.46.5 (30-Dec-2021) 00:06:36.202 Discarding device blocks: 0/522240 done 00:06:36.202 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:36.202 Filesystem UUID: 23fbbf25-91bf-4556-961c-3ef2fd9f1b1d 00:06:36.203 Superblock backups stored on blocks: 00:06:36.203 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:36.203 00:06:36.203 Allocating group tables: 0/64 done 00:06:36.203 Writing inode tables: 0/64 done 00:06:39.495 Creating journal (8192 blocks): done 00:06:40.064 Writing superblocks and filesystem accounting information: 0/64 done 00:06:40.064 00:06:40.064 06:46:46 -- common/autotest_common.sh@921 -- # return 0 00:06:40.064 06:46:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.002 06:46:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.002 06:46:47 -- target/filesystem.sh@25 -- # sync 
00:06:41.002 06:46:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:41.002 06:46:47 -- target/filesystem.sh@27 -- # sync 00:06:41.002 06:46:47 -- target/filesystem.sh@29 -- # i=0 00:06:41.002 06:46:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.002 06:46:47 -- target/filesystem.sh@37 -- # kill -0 2930638 00:06:41.002 06:46:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.002 06:46:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.002 06:46:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.002 06:46:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.002 00:06:41.002 real 0m4.912s 00:06:41.002 user 0m0.017s 00:06:41.002 sys 0m0.032s 00:06:41.002 06:46:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.002 06:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:41.002 ************************************ 00:06:41.002 END TEST filesystem_ext4 00:06:41.002 ************************************ 00:06:41.002 06:46:47 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:41.002 06:46:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:41.002 06:46:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.002 06:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:41.002 ************************************ 00:06:41.002 START TEST filesystem_btrfs 00:06:41.002 ************************************ 00:06:41.002 06:46:47 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:41.002 06:46:47 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:41.002 06:46:47 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.002 06:46:47 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:41.002 06:46:47 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:41.002 06:46:47 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:41.002 06:46:47 -- 
common/autotest_common.sh@904 -- # local i=0 00:06:41.002 06:46:47 -- common/autotest_common.sh@905 -- # local force 00:06:41.002 06:46:47 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:41.002 06:46:47 -- common/autotest_common.sh@910 -- # force=-f 00:06:41.002 06:46:47 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:41.262 btrfs-progs v6.6.2 00:06:41.262 See https://btrfs.readthedocs.io for more information. 00:06:41.262 00:06:41.262 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:41.262 NOTE: several default settings have changed in version 5.15, please make sure 00:06:41.262 this does not affect your deployments: 00:06:41.262 - DUP for metadata (-m dup) 00:06:41.262 - enabled no-holes (-O no-holes) 00:06:41.262 - enabled free-space-tree (-R free-space-tree) 00:06:41.262 00:06:41.262 Label: (null) 00:06:41.262 UUID: b083a874-08cc-4160-ba19-c24ab584085a 00:06:41.262 Node size: 16384 00:06:41.262 Sector size: 4096 00:06:41.262 Filesystem size: 510.00MiB 00:06:41.262 Block group profiles: 00:06:41.262 Data: single 8.00MiB 00:06:41.262 Metadata: DUP 32.00MiB 00:06:41.262 System: DUP 8.00MiB 00:06:41.262 SSD detected: yes 00:06:41.262 Zoned device: no 00:06:41.262 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:41.262 Runtime features: free-space-tree 00:06:41.262 Checksum: crc32c 00:06:41.262 Number of devices: 1 00:06:41.262 Devices: 00:06:41.262 ID SIZE PATH 00:06:41.262 1 510.00MiB /dev/nvme0n1p1 00:06:41.262 00:06:41.262 06:46:48 -- common/autotest_common.sh@921 -- # return 0 00:06:41.262 06:46:48 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.199 06:46:49 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.199 06:46:49 -- target/filesystem.sh@25 -- # sync 00:06:42.199 06:46:49 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.199 06:46:49 -- target/filesystem.sh@27 -- # sync 00:06:42.199 06:46:49 -- target/filesystem.sh@29 -- # 
i=0 00:06:42.199 06:46:49 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.199 06:46:49 -- target/filesystem.sh@37 -- # kill -0 2930638 00:06:42.199 06:46:49 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.199 06:46:49 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.199 06:46:49 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.199 06:46:49 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.199 00:06:42.199 real 0m1.173s 00:06:42.199 user 0m0.015s 00:06:42.199 sys 0m0.038s 00:06:42.199 06:46:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.199 06:46:49 -- common/autotest_common.sh@10 -- # set +x 00:06:42.199 ************************************ 00:06:42.199 END TEST filesystem_btrfs 00:06:42.199 ************************************ 00:06:42.199 06:46:49 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:42.199 06:46:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:42.199 06:46:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.199 06:46:49 -- common/autotest_common.sh@10 -- # set +x 00:06:42.199 ************************************ 00:06:42.199 START TEST filesystem_xfs 00:06:42.199 ************************************ 00:06:42.199 06:46:49 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:42.199 06:46:49 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:42.199 06:46:49 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:42.199 06:46:49 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:42.199 06:46:49 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:42.199 06:46:49 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:42.199 06:46:49 -- common/autotest_common.sh@904 -- # local i=0 00:06:42.199 06:46:49 -- common/autotest_common.sh@905 -- # local force 00:06:42.199 06:46:49 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 
00:06:42.199 06:46:49 -- common/autotest_common.sh@910 -- # force=-f 00:06:42.199 06:46:49 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:42.199 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:42.199 = sectsz=512 attr=2, projid32bit=1 00:06:42.199 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:42.199 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:42.199 data = bsize=4096 blocks=130560, imaxpct=25 00:06:42.199 = sunit=0 swidth=0 blks 00:06:42.199 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:42.199 log =internal log bsize=4096 blocks=16384, version=2 00:06:42.199 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:42.199 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:43.136 Discarding blocks...Done. 00:06:43.137 06:46:50 -- common/autotest_common.sh@921 -- # return 0 00:06:43.137 06:46:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:45.672 06:46:52 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:45.672 06:46:52 -- target/filesystem.sh@25 -- # sync 00:06:45.672 06:46:52 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:45.672 06:46:52 -- target/filesystem.sh@27 -- # sync 00:06:45.672 06:46:52 -- target/filesystem.sh@29 -- # i=0 00:06:45.672 06:46:52 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:45.672 06:46:52 -- target/filesystem.sh@37 -- # kill -0 2930638 00:06:45.672 06:46:52 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:45.672 06:46:52 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:45.672 06:46:52 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:45.672 06:46:52 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:45.672 00:06:45.672 real 0m3.312s 00:06:45.672 user 0m0.014s 00:06:45.672 sys 0m0.046s 00:06:45.672 06:46:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.672 06:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.672 ************************************ 00:06:45.672 END TEST filesystem_xfs 
00:06:45.672 ************************************ 00:06:45.672 06:46:52 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:45.672 06:46:52 -- target/filesystem.sh@93 -- # sync 00:06:45.672 06:46:52 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:45.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:45.672 06:46:52 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:45.672 06:46:52 -- common/autotest_common.sh@1198 -- # local i=0 00:06:45.672 06:46:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:45.672 06:46:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.672 06:46:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:45.672 06:46:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.672 06:46:52 -- common/autotest_common.sh@1210 -- # return 0 00:06:45.672 06:46:52 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:45.672 06:46:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:45.672 06:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.672 06:46:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:45.672 06:46:52 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:45.672 06:46:52 -- target/filesystem.sh@101 -- # killprocess 2930638 00:06:45.672 06:46:52 -- common/autotest_common.sh@926 -- # '[' -z 2930638 ']' 00:06:45.672 06:46:52 -- common/autotest_common.sh@930 -- # kill -0 2930638 00:06:45.672 06:46:52 -- common/autotest_common.sh@931 -- # uname 00:06:45.672 06:46:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.672 06:46:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2930638 00:06:45.672 06:46:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:45.672 06:46:52 -- common/autotest_common.sh@936 -- # 
'[' reactor_0 = sudo ']' 00:06:45.672 06:46:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2930638' 00:06:45.672 killing process with pid 2930638 00:06:45.672 06:46:52 -- common/autotest_common.sh@945 -- # kill 2930638 00:06:45.672 06:46:52 -- common/autotest_common.sh@950 -- # wait 2930638 00:06:46.242 06:46:53 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:46.242 00:06:46.242 real 0m16.100s 00:06:46.242 user 1m2.045s 00:06:46.242 sys 0m1.972s 00:06:46.242 06:46:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.242 06:46:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.242 ************************************ 00:06:46.242 END TEST nvmf_filesystem_no_in_capsule 00:06:46.242 ************************************ 00:06:46.242 06:46:53 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:46.242 06:46:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:46.242 06:46:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.242 06:46:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.242 ************************************ 00:06:46.242 START TEST nvmf_filesystem_in_capsule 00:06:46.242 ************************************ 00:06:46.242 06:46:53 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:06:46.242 06:46:53 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:46.242 06:46:53 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:46.242 06:46:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:46.242 06:46:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:46.242 06:46:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.242 06:46:53 -- nvmf/common.sh@469 -- # nvmfpid=2932813 00:06:46.242 06:46:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:46.242 06:46:53 -- nvmf/common.sh@470 -- # 
waitforlisten 2932813 00:06:46.242 06:46:53 -- common/autotest_common.sh@819 -- # '[' -z 2932813 ']' 00:06:46.242 06:46:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.242 06:46:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.242 06:46:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.242 06:46:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.242 06:46:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.242 [2024-05-12 06:46:53.187348] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:46.242 [2024-05-12 06:46:53.187436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.242 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.242 [2024-05-12 06:46:53.258462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.242 [2024-05-12 06:46:53.369287] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.242 [2024-05-12 06:46:53.369427] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.242 [2024-05-12 06:46:53.369445] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.242 [2024-05-12 06:46:53.369458] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:46.242 [2024-05-12 06:46:53.369526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.242 [2024-05-12 06:46:53.369562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.242 [2024-05-12 06:46:53.369619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.242 [2024-05-12 06:46:53.369622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.179 06:46:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:47.179 06:46:54 -- common/autotest_common.sh@852 -- # return 0 00:06:47.179 06:46:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:47.179 06:46:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:47.179 06:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.179 06:46:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.179 06:46:54 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:47.179 06:46:54 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:47.179 06:46:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.179 06:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.179 [2024-05-12 06:46:54.178291] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.179 06:46:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.179 06:46:54 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:47.179 06:46:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.179 06:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.439 Malloc1 00:06:47.439 06:46:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.439 06:46:54 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:47.439 06:46:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.439 06:46:54 -- 
common/autotest_common.sh@10 -- # set +x 00:06:47.439 06:46:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.439 06:46:54 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:47.439 06:46:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.439 06:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.439 06:46:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.439 06:46:54 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:47.439 06:46:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.439 06:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.439 [2024-05-12 06:46:54.363358] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.439 06:46:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.439 06:46:54 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:47.439 06:46:54 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:47.439 06:46:54 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:47.439 06:46:54 -- common/autotest_common.sh@1359 -- # local bs 00:06:47.439 06:46:54 -- common/autotest_common.sh@1360 -- # local nb 00:06:47.439 06:46:54 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:47.439 06:46:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.439 06:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.439 06:46:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.439 06:46:54 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:47.439 { 00:06:47.439 "name": "Malloc1", 00:06:47.439 "aliases": [ 00:06:47.439 "27485f8f-8f03-47ae-9c9a-a08d17edbf75" 00:06:47.439 ], 00:06:47.439 "product_name": "Malloc disk", 00:06:47.439 "block_size": 512, 00:06:47.439 "num_blocks": 1048576, 00:06:47.439 "uuid": 
"27485f8f-8f03-47ae-9c9a-a08d17edbf75", 00:06:47.439 "assigned_rate_limits": { 00:06:47.439 "rw_ios_per_sec": 0, 00:06:47.439 "rw_mbytes_per_sec": 0, 00:06:47.439 "r_mbytes_per_sec": 0, 00:06:47.439 "w_mbytes_per_sec": 0 00:06:47.439 }, 00:06:47.439 "claimed": true, 00:06:47.439 "claim_type": "exclusive_write", 00:06:47.439 "zoned": false, 00:06:47.439 "supported_io_types": { 00:06:47.439 "read": true, 00:06:47.439 "write": true, 00:06:47.439 "unmap": true, 00:06:47.439 "write_zeroes": true, 00:06:47.439 "flush": true, 00:06:47.439 "reset": true, 00:06:47.439 "compare": false, 00:06:47.439 "compare_and_write": false, 00:06:47.439 "abort": true, 00:06:47.439 "nvme_admin": false, 00:06:47.439 "nvme_io": false 00:06:47.439 }, 00:06:47.439 "memory_domains": [ 00:06:47.439 { 00:06:47.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.439 "dma_device_type": 2 00:06:47.439 } 00:06:47.439 ], 00:06:47.439 "driver_specific": {} 00:06:47.439 } 00:06:47.439 ]' 00:06:47.439 06:46:54 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:47.439 06:46:54 -- common/autotest_common.sh@1362 -- # bs=512 00:06:47.439 06:46:54 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:47.439 06:46:54 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:47.439 06:46:54 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:47.439 06:46:54 -- common/autotest_common.sh@1367 -- # echo 512 00:06:47.439 06:46:54 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:47.439 06:46:54 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:48.006 06:46:55 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:48.006 06:46:55 -- common/autotest_common.sh@1177 -- # local i=0 00:06:48.006 06:46:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:06:48.006 06:46:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:48.006 06:46:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:50.599 06:46:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:50.599 06:46:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:50.599 06:46:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:50.599 06:46:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:50.599 06:46:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:50.599 06:46:57 -- common/autotest_common.sh@1187 -- # return 0 00:06:50.599 06:46:57 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:50.599 06:46:57 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:50.599 06:46:57 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:50.599 06:46:57 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:50.599 06:46:57 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:50.599 06:46:57 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:50.599 06:46:57 -- setup/common.sh@80 -- # echo 536870912 00:06:50.599 06:46:57 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:50.599 06:46:57 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:50.599 06:46:57 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:50.599 06:46:57 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:50.599 06:46:57 -- target/filesystem.sh@69 -- # partprobe 00:06:50.599 06:46:57 -- target/filesystem.sh@70 -- # sleep 1 00:06:51.535 06:46:58 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:51.535 06:46:58 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:51.535 06:46:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:51.535 06:46:58 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.535 06:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.535 ************************************ 00:06:51.535 START TEST filesystem_in_capsule_ext4 00:06:51.535 ************************************ 00:06:51.535 06:46:58 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:51.535 06:46:58 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:51.792 06:46:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.792 06:46:58 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:51.792 06:46:58 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:51.792 06:46:58 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:51.792 06:46:58 -- common/autotest_common.sh@904 -- # local i=0 00:06:51.792 06:46:58 -- common/autotest_common.sh@905 -- # local force 00:06:51.792 06:46:58 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:51.792 06:46:58 -- common/autotest_common.sh@908 -- # force=-F 00:06:51.792 06:46:58 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:51.792 mke2fs 1.46.5 (30-Dec-2021) 00:06:51.792 Discarding device blocks: 0/522240 done 00:06:51.792 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:51.792 Filesystem UUID: 0d568adb-b48b-442b-a584-8dc780c3e6d9 00:06:51.792 Superblock backups stored on blocks: 00:06:51.792 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:51.792 00:06:51.792 Allocating group tables: 0/64 done 00:06:51.792 Writing inode tables: 0/64 done 00:06:55.084 Creating journal (8192 blocks): done 00:06:55.647 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:06:55.647 00:06:55.647 06:47:02 -- common/autotest_common.sh@921 -- # return 0 00:06:55.647 06:47:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:55.647 06:47:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:55.648 
06:47:02 -- target/filesystem.sh@25 -- # sync 00:06:55.648 06:47:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:55.648 06:47:02 -- target/filesystem.sh@27 -- # sync 00:06:55.648 06:47:02 -- target/filesystem.sh@29 -- # i=0 00:06:55.648 06:47:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:55.648 06:47:02 -- target/filesystem.sh@37 -- # kill -0 2932813 00:06:55.648 06:47:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:55.648 06:47:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:55.648 06:47:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:55.648 06:47:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:55.648 00:06:55.648 real 0m4.108s 00:06:55.648 user 0m0.018s 00:06:55.648 sys 0m0.037s 00:06:55.648 06:47:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.648 06:47:02 -- common/autotest_common.sh@10 -- # set +x 00:06:55.648 ************************************ 00:06:55.648 END TEST filesystem_in_capsule_ext4 00:06:55.648 ************************************ 00:06:55.906 06:47:02 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:55.906 06:47:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:55.906 06:47:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.906 06:47:02 -- common/autotest_common.sh@10 -- # set +x 00:06:55.906 ************************************ 00:06:55.906 START TEST filesystem_in_capsule_btrfs 00:06:55.906 ************************************ 00:06:55.906 06:47:02 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:55.906 06:47:02 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:55.906 06:47:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:55.906 06:47:02 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:55.906 06:47:02 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:55.906 06:47:02 -- 
common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:55.906 06:47:02 -- common/autotest_common.sh@904 -- # local i=0 00:06:55.906 06:47:02 -- common/autotest_common.sh@905 -- # local force 00:06:55.906 06:47:02 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:55.906 06:47:02 -- common/autotest_common.sh@910 -- # force=-f 00:06:55.906 06:47:02 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:56.164 btrfs-progs v6.6.2 00:06:56.164 See https://btrfs.readthedocs.io for more information. 00:06:56.164 00:06:56.164 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:56.164 NOTE: several default settings have changed in version 5.15, please make sure 00:06:56.165 this does not affect your deployments: 00:06:56.165 - DUP for metadata (-m dup) 00:06:56.165 - enabled no-holes (-O no-holes) 00:06:56.165 - enabled free-space-tree (-R free-space-tree) 00:06:56.165 00:06:56.165 Label: (null) 00:06:56.165 UUID: 7c120259-ea29-4017-8f79-5c288e83516f 00:06:56.165 Node size: 16384 00:06:56.165 Sector size: 4096 00:06:56.165 Filesystem size: 510.00MiB 00:06:56.165 Block group profiles: 00:06:56.165 Data: single 8.00MiB 00:06:56.165 Metadata: DUP 32.00MiB 00:06:56.165 System: DUP 8.00MiB 00:06:56.165 SSD detected: yes 00:06:56.165 Zoned device: no 00:06:56.165 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:56.165 Runtime features: free-space-tree 00:06:56.165 Checksum: crc32c 00:06:56.165 Number of devices: 1 00:06:56.165 Devices: 00:06:56.165 ID SIZE PATH 00:06:56.165 1 510.00MiB /dev/nvme0n1p1 00:06:56.165 00:06:56.165 06:47:03 -- common/autotest_common.sh@921 -- # return 0 00:06:56.165 06:47:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:56.730 06:47:03 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:56.730 06:47:03 -- target/filesystem.sh@25 -- # sync 00:06:56.730 06:47:03 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:56.730 06:47:03 
-- target/filesystem.sh@27 -- # sync 00:06:56.730 06:47:03 -- target/filesystem.sh@29 -- # i=0 00:06:56.730 06:47:03 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:56.730 06:47:03 -- target/filesystem.sh@37 -- # kill -0 2932813 00:06:56.730 06:47:03 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:56.730 06:47:03 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:56.730 06:47:03 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:56.730 06:47:03 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:56.730 00:06:56.730 real 0m1.018s 00:06:56.730 user 0m0.016s 00:06:56.730 sys 0m0.045s 00:06:56.730 06:47:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.730 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:56.730 ************************************ 00:06:56.730 END TEST filesystem_in_capsule_btrfs 00:06:56.730 ************************************ 00:06:56.730 06:47:03 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:56.730 06:47:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:56.730 06:47:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.730 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:56.730 ************************************ 00:06:56.730 START TEST filesystem_in_capsule_xfs 00:06:56.730 ************************************ 00:06:56.730 06:47:03 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:56.730 06:47:03 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:56.730 06:47:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:56.730 06:47:03 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:56.730 06:47:03 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:56.730 06:47:03 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:56.730 06:47:03 -- common/autotest_common.sh@904 -- # local i=0 00:06:56.730 06:47:03 -- 
common/autotest_common.sh@905 -- # local force 00:06:56.730 06:47:03 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:56.730 06:47:03 -- common/autotest_common.sh@910 -- # force=-f 00:06:56.730 06:47:03 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:56.989 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:56.989 = sectsz=512 attr=2, projid32bit=1 00:06:56.989 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:56.989 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:56.989 data = bsize=4096 blocks=130560, imaxpct=25 00:06:56.989 = sunit=0 swidth=0 blks 00:06:56.989 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:56.989 log =internal log bsize=4096 blocks=16384, version=2 00:06:56.989 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:56.989 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:57.929 Discarding blocks...Done. 00:06:57.929 06:47:04 -- common/autotest_common.sh@921 -- # return 0 00:06:57.929 06:47:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:59.835 06:47:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:59.836 06:47:06 -- target/filesystem.sh@25 -- # sync 00:06:59.836 06:47:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:59.836 06:47:06 -- target/filesystem.sh@27 -- # sync 00:06:59.836 06:47:06 -- target/filesystem.sh@29 -- # i=0 00:06:59.836 06:47:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:59.836 06:47:06 -- target/filesystem.sh@37 -- # kill -0 2932813 00:06:59.836 06:47:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:59.836 06:47:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:59.836 06:47:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:59.836 06:47:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:59.836 00:06:59.836 real 0m2.735s 00:06:59.836 user 0m0.012s 00:06:59.836 sys 0m0.044s 00:06:59.836 06:47:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.836 06:47:06 -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.836 ************************************ 00:06:59.836 END TEST filesystem_in_capsule_xfs 00:06:59.836 ************************************ 00:06:59.836 06:47:06 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:59.836 06:47:06 -- target/filesystem.sh@93 -- # sync 00:06:59.836 06:47:06 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:59.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.836 06:47:06 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:59.836 06:47:06 -- common/autotest_common.sh@1198 -- # local i=0 00:06:59.836 06:47:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:59.836 06:47:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.836 06:47:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:59.836 06:47:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.836 06:47:06 -- common/autotest_common.sh@1210 -- # return 0 00:06:59.836 06:47:06 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:59.836 06:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.836 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:59.836 06:47:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.836 06:47:06 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:59.836 06:47:06 -- target/filesystem.sh@101 -- # killprocess 2932813 00:06:59.836 06:47:06 -- common/autotest_common.sh@926 -- # '[' -z 2932813 ']' 00:06:59.836 06:47:06 -- common/autotest_common.sh@930 -- # kill -0 2932813 00:06:59.836 06:47:06 -- common/autotest_common.sh@931 -- # uname 00:06:59.836 06:47:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:59.836 06:47:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2932813 
00:06:59.836 06:47:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:59.836 06:47:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:59.836 06:47:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2932813' 00:06:59.836 killing process with pid 2932813 00:06:59.836 06:47:06 -- common/autotest_common.sh@945 -- # kill 2932813 00:06:59.836 06:47:06 -- common/autotest_common.sh@950 -- # wait 2932813 00:07:00.406 06:47:07 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:00.406 00:07:00.406 real 0m14.101s 00:07:00.406 user 0m54.324s 00:07:00.406 sys 0m1.735s 00:07:00.406 06:47:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.406 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:07:00.406 ************************************ 00:07:00.406 END TEST nvmf_filesystem_in_capsule 00:07:00.406 ************************************ 00:07:00.406 06:47:07 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:00.406 06:47:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:00.406 06:47:07 -- nvmf/common.sh@116 -- # sync 00:07:00.406 06:47:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:00.406 06:47:07 -- nvmf/common.sh@119 -- # set +e 00:07:00.406 06:47:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:00.406 06:47:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:00.406 rmmod nvme_tcp 00:07:00.406 rmmod nvme_fabrics 00:07:00.406 rmmod nvme_keyring 00:07:00.406 06:47:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:00.406 06:47:07 -- nvmf/common.sh@123 -- # set -e 00:07:00.406 06:47:07 -- nvmf/common.sh@124 -- # return 0 00:07:00.406 06:47:07 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:00.406 06:47:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:00.406 06:47:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:00.406 06:47:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:00.406 06:47:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:07:00.406 06:47:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:00.406 06:47:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.406 06:47:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.406 06:47:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.313 06:47:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:02.313 00:07:02.313 real 0m34.541s 00:07:02.313 user 1m57.157s 00:07:02.313 sys 0m5.260s 00:07:02.313 06:47:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.313 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:07:02.313 ************************************ 00:07:02.313 END TEST nvmf_filesystem 00:07:02.313 ************************************ 00:07:02.313 06:47:09 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:02.313 06:47:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:02.313 06:47:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.313 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:07:02.313 ************************************ 00:07:02.313 START TEST nvmf_discovery 00:07:02.313 ************************************ 00:07:02.313 06:47:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:02.313 * Looking for test storage... 
00:07:02.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.313 06:47:09 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.313 06:47:09 -- nvmf/common.sh@7 -- # uname -s 00:07:02.313 06:47:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.313 06:47:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.313 06:47:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.313 06:47:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.313 06:47:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.313 06:47:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.313 06:47:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.313 06:47:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.313 06:47:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.313 06:47:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.313 06:47:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.313 06:47:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.313 06:47:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.313 06:47:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.313 06:47:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.313 06:47:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.313 06:47:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.313 06:47:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.313 06:47:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.313 06:47:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.313 06:47:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.313 06:47:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.313 06:47:09 -- paths/export.sh@5 -- # export PATH 00:07:02.313 06:47:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.313 06:47:09 -- nvmf/common.sh@46 -- # : 0 00:07:02.572 06:47:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:02.572 06:47:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:02.572 06:47:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:02.572 06:47:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.572 06:47:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.572 06:47:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:02.572 06:47:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:02.572 06:47:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:02.572 06:47:09 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:02.572 06:47:09 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:02.572 06:47:09 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:02.572 06:47:09 -- target/discovery.sh@15 -- # hash nvme 00:07:02.572 06:47:09 -- target/discovery.sh@20 -- # nvmftestinit 00:07:02.572 06:47:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:02.572 06:47:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.572 06:47:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:02.572 06:47:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:02.572 06:47:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:02.572 06:47:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.572 06:47:09 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:07:02.572 06:47:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.572 06:47:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:02.572 06:47:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:02.572 06:47:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:02.572 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:07:04.478 06:47:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:04.478 06:47:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:04.478 06:47:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:04.478 06:47:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:04.478 06:47:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:04.478 06:47:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:04.478 06:47:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:04.478 06:47:11 -- nvmf/common.sh@294 -- # net_devs=() 00:07:04.478 06:47:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:04.478 06:47:11 -- nvmf/common.sh@295 -- # e810=() 00:07:04.478 06:47:11 -- nvmf/common.sh@295 -- # local -ga e810 00:07:04.478 06:47:11 -- nvmf/common.sh@296 -- # x722=() 00:07:04.478 06:47:11 -- nvmf/common.sh@296 -- # local -ga x722 00:07:04.478 06:47:11 -- nvmf/common.sh@297 -- # mlx=() 00:07:04.478 06:47:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:04.478 06:47:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.478 06:47:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:04.478 06:47:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:04.478 06:47:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:04.478 06:47:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:04.478 06:47:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:04.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:04.478 06:47:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:04.478 06:47:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:04.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:04.478 06:47:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:07:04.478 06:47:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:04.478 06:47:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:04.478 06:47:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:04.478 06:47:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.478 06:47:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:04.478 06:47:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.478 06:47:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:04.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:04.478 06:47:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.478 06:47:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:04.478 06:47:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.478 06:47:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:04.478 06:47:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.479 06:47:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:04.479 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:04.479 06:47:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.479 06:47:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:04.479 06:47:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:04.479 06:47:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:04.479 06:47:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:04.479 06:47:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:04.479 06:47:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.479 06:47:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.479 06:47:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.479 06:47:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:04.479 06:47:11 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.479 06:47:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.479 06:47:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:04.479 06:47:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.479 06:47:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.479 06:47:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:04.479 06:47:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:04.479 06:47:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.479 06:47:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.479 06:47:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.479 06:47:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.479 06:47:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:04.479 06:47:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.479 06:47:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.479 06:47:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.479 06:47:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:04.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:04.479 00:07:04.479 --- 10.0.0.2 ping statistics --- 00:07:04.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.479 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:04.479 06:47:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:07:04.479 00:07:04.479 --- 10.0.0.1 ping statistics --- 00:07:04.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.479 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:04.479 06:47:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.479 06:47:11 -- nvmf/common.sh@410 -- # return 0 00:07:04.479 06:47:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:04.479 06:47:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.479 06:47:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:04.479 06:47:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:04.479 06:47:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.479 06:47:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:04.479 06:47:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:04.479 06:47:11 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:04.479 06:47:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:04.479 06:47:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:04.479 06:47:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.479 06:47:11 -- nvmf/common.sh@469 -- # nvmfpid=2936739 00:07:04.479 06:47:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.479 06:47:11 -- nvmf/common.sh@470 -- # waitforlisten 2936739 00:07:04.479 06:47:11 -- common/autotest_common.sh@819 -- # '[' -z 2936739 ']' 00:07:04.479 06:47:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.479 06:47:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:04.479 06:47:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:04.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.479 06:47:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:04.479 06:47:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.739 [2024-05-12 06:47:11.610377] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:04.739 [2024-05-12 06:47:11.610468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.739 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.739 [2024-05-12 06:47:11.680823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.739 [2024-05-12 06:47:11.800522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.739 [2024-05-12 06:47:11.800700] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.739 [2024-05-12 06:47:11.800721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.739 [2024-05-12 06:47:11.800757] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:04.739 [2024-05-12 06:47:11.800820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.739 [2024-05-12 06:47:11.800890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.739 [2024-05-12 06:47:11.800940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.739 [2024-05-12 06:47:11.800943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.677 06:47:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:05.677 06:47:12 -- common/autotest_common.sh@852 -- # return 0 00:07:05.677 06:47:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:05.677 06:47:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.677 06:47:12 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 [2024-05-12 06:47:12.605224] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@26 -- # seq 1 4 00:07:05.677 06:47:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:05.677 06:47:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 Null1 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 [2024-05-12 06:47:12.645478] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:05.677 06:47:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 Null2 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 
06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:05.677 06:47:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 Null3 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:05.677 06:47:12 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 Null4 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.677 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.677 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.677 06:47:12 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:05.677 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.678 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.678 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.678 06:47:12 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:05.937 00:07:05.937 Discovery Log Number of Records 6, Generation counter 6 00:07:05.937 =====Discovery Log Entry 0====== 00:07:05.937 trtype: tcp 00:07:05.937 adrfam: ipv4 00:07:05.937 subtype: current discovery subsystem 00:07:05.937 treq: not required 00:07:05.937 portid: 0 00:07:05.937 trsvcid: 4420 00:07:05.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:05.937 traddr: 10.0.0.2 00:07:05.937 eflags: explicit discovery connections, duplicate discovery information 00:07:05.937 sectype: none 00:07:05.937 =====Discovery Log Entry 1====== 00:07:05.937 trtype: tcp 00:07:05.937 adrfam: ipv4 00:07:05.937 subtype: nvme subsystem 00:07:05.937 treq: not required 00:07:05.937 portid: 0 00:07:05.937 trsvcid: 4420 00:07:05.937 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:05.937 traddr: 10.0.0.2 00:07:05.937 eflags: none 00:07:05.937 sectype: none 00:07:05.937 =====Discovery Log Entry 2====== 00:07:05.937 trtype: tcp 00:07:05.937 adrfam: ipv4 00:07:05.937 subtype: nvme subsystem 00:07:05.937 treq: not required 00:07:05.937 portid: 0 00:07:05.937 trsvcid: 4420 00:07:05.937 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:05.937 traddr: 10.0.0.2 00:07:05.937 eflags: none 00:07:05.937 sectype: none 00:07:05.937 =====Discovery Log Entry 3====== 00:07:05.937 trtype: tcp 00:07:05.937 adrfam: ipv4 00:07:05.937 subtype: nvme subsystem 00:07:05.937 treq: not required 00:07:05.937 portid: 0 00:07:05.937 trsvcid: 4420 00:07:05.937 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:05.937 traddr: 10.0.0.2 00:07:05.937 eflags: none 00:07:05.937 sectype: none 00:07:05.937 =====Discovery Log Entry 4====== 00:07:05.937 trtype: tcp 00:07:05.937 adrfam: ipv4 00:07:05.937 subtype: nvme subsystem 00:07:05.937 treq: not required 00:07:05.937 portid: 0 00:07:05.937 trsvcid: 4420 00:07:05.937 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:07:05.937 traddr: 10.0.0.2 00:07:05.937 eflags: none 00:07:05.937 sectype: none 00:07:05.937 =====Discovery Log Entry 5====== 00:07:05.937 trtype: tcp 00:07:05.937 adrfam: ipv4 00:07:05.937 subtype: discovery subsystem referral 00:07:05.937 treq: not required 00:07:05.937 portid: 0 00:07:05.937 trsvcid: 4430 00:07:05.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:05.937 traddr: 10.0.0.2 00:07:05.937 eflags: none 00:07:05.937 sectype: none 00:07:05.937 06:47:12 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:05.937 Perform nvmf subsystem discovery via RPC 00:07:05.937 06:47:12 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:05.937 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.937 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.937 [2024-05-12 06:47:12.825930] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:05.937 [ 00:07:05.937 { 00:07:05.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:05.937 "subtype": "Discovery", 00:07:05.937 "listen_addresses": [ 00:07:05.937 { 00:07:05.937 "transport": "TCP", 00:07:05.937 "trtype": "TCP", 00:07:05.937 "adrfam": "IPv4", 00:07:05.937 "traddr": "10.0.0.2", 00:07:05.937 "trsvcid": "4420" 00:07:05.937 } 00:07:05.937 ], 00:07:05.937 "allow_any_host": true, 00:07:05.937 "hosts": [] 00:07:05.937 }, 00:07:05.937 { 00:07:05.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:05.938 "subtype": "NVMe", 00:07:05.938 "listen_addresses": [ 00:07:05.938 { 00:07:05.938 "transport": "TCP", 00:07:05.938 "trtype": "TCP", 00:07:05.938 "adrfam": "IPv4", 00:07:05.938 "traddr": "10.0.0.2", 00:07:05.938 "trsvcid": "4420" 00:07:05.938 } 00:07:05.938 ], 00:07:05.938 "allow_any_host": true, 00:07:05.938 "hosts": [], 00:07:05.938 "serial_number": "SPDK00000000000001", 00:07:05.938 "model_number": 
"SPDK bdev Controller", 00:07:05.938 "max_namespaces": 32, 00:07:05.938 "min_cntlid": 1, 00:07:05.938 "max_cntlid": 65519, 00:07:05.938 "namespaces": [ 00:07:05.938 { 00:07:05.938 "nsid": 1, 00:07:05.938 "bdev_name": "Null1", 00:07:05.938 "name": "Null1", 00:07:05.938 "nguid": "1863EB4E60114F44B13A6BDD515BA885", 00:07:05.938 "uuid": "1863eb4e-6011-4f44-b13a-6bdd515ba885" 00:07:05.938 } 00:07:05.938 ] 00:07:05.938 }, 00:07:05.938 { 00:07:05.938 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:05.938 "subtype": "NVMe", 00:07:05.938 "listen_addresses": [ 00:07:05.938 { 00:07:05.938 "transport": "TCP", 00:07:05.938 "trtype": "TCP", 00:07:05.938 "adrfam": "IPv4", 00:07:05.938 "traddr": "10.0.0.2", 00:07:05.938 "trsvcid": "4420" 00:07:05.938 } 00:07:05.938 ], 00:07:05.938 "allow_any_host": true, 00:07:05.938 "hosts": [], 00:07:05.938 "serial_number": "SPDK00000000000002", 00:07:05.938 "model_number": "SPDK bdev Controller", 00:07:05.938 "max_namespaces": 32, 00:07:05.938 "min_cntlid": 1, 00:07:05.938 "max_cntlid": 65519, 00:07:05.938 "namespaces": [ 00:07:05.938 { 00:07:05.938 "nsid": 1, 00:07:05.938 "bdev_name": "Null2", 00:07:05.938 "name": "Null2", 00:07:05.938 "nguid": "1685C3EBD5224C8BA1FEBDC0AA00DEEA", 00:07:05.938 "uuid": "1685c3eb-d522-4c8b-a1fe-bdc0aa00deea" 00:07:05.938 } 00:07:05.938 ] 00:07:05.938 }, 00:07:05.938 { 00:07:05.938 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:05.938 "subtype": "NVMe", 00:07:05.938 "listen_addresses": [ 00:07:05.938 { 00:07:05.938 "transport": "TCP", 00:07:05.938 "trtype": "TCP", 00:07:05.938 "adrfam": "IPv4", 00:07:05.938 "traddr": "10.0.0.2", 00:07:05.938 "trsvcid": "4420" 00:07:05.938 } 00:07:05.938 ], 00:07:05.938 "allow_any_host": true, 00:07:05.938 "hosts": [], 00:07:05.938 "serial_number": "SPDK00000000000003", 00:07:05.938 "model_number": "SPDK bdev Controller", 00:07:05.938 "max_namespaces": 32, 00:07:05.938 "min_cntlid": 1, 00:07:05.938 "max_cntlid": 65519, 00:07:05.938 "namespaces": [ 00:07:05.938 { 00:07:05.938 "nsid": 1, 
00:07:05.938 "bdev_name": "Null3", 00:07:05.938 "name": "Null3", 00:07:05.938 "nguid": "D86D29FFA0EF471EA8B9580ADF662BC7", 00:07:05.938 "uuid": "d86d29ff-a0ef-471e-a8b9-580adf662bc7" 00:07:05.938 } 00:07:05.938 ] 00:07:05.938 }, 00:07:05.938 { 00:07:05.938 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:05.938 "subtype": "NVMe", 00:07:05.938 "listen_addresses": [ 00:07:05.938 { 00:07:05.938 "transport": "TCP", 00:07:05.938 "trtype": "TCP", 00:07:05.938 "adrfam": "IPv4", 00:07:05.938 "traddr": "10.0.0.2", 00:07:05.938 "trsvcid": "4420" 00:07:05.938 } 00:07:05.938 ], 00:07:05.938 "allow_any_host": true, 00:07:05.938 "hosts": [], 00:07:05.938 "serial_number": "SPDK00000000000004", 00:07:05.938 "model_number": "SPDK bdev Controller", 00:07:05.938 "max_namespaces": 32, 00:07:05.938 "min_cntlid": 1, 00:07:05.938 "max_cntlid": 65519, 00:07:05.938 "namespaces": [ 00:07:05.938 { 00:07:05.938 "nsid": 1, 00:07:05.938 "bdev_name": "Null4", 00:07:05.938 "name": "Null4", 00:07:05.938 "nguid": "C9B6B0D5EAD04233B23D444AF0984DB2", 00:07:05.938 "uuid": "c9b6b0d5-ead0-4233-b23d-444af0984db2" 00:07:05.938 } 00:07:05.938 ] 00:07:05.938 } 00:07:05.938 ] 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@42 -- # seq 1 4 00:07:05.938 06:47:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:05.938 06:47:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:05.938 06:47:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:05.938 06:47:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:05.938 06:47:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 
06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:05.938 06:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.938 06:47:12 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:05.938 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.938 06:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.938 06:47:12 -- target/discovery.sh@49 -- # check_bdevs= 00:07:05.938 06:47:12 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:05.938 06:47:12 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:05.938 06:47:12 -- target/discovery.sh@57 -- # nvmftestfini 00:07:05.938 06:47:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:05.938 06:47:12 -- nvmf/common.sh@116 -- # sync 00:07:05.938 06:47:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:05.938 06:47:12 -- nvmf/common.sh@119 -- # set +e 00:07:05.938 06:47:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:05.938 06:47:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:05.938 rmmod nvme_tcp 00:07:05.938 rmmod nvme_fabrics 00:07:05.938 rmmod nvme_keyring 00:07:05.938 06:47:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:05.938 06:47:12 -- nvmf/common.sh@123 -- # set -e 00:07:05.938 06:47:12 -- nvmf/common.sh@124 -- # return 0 00:07:05.938 06:47:12 -- nvmf/common.sh@477 -- # '[' -n 2936739 ']' 00:07:05.938 06:47:12 -- nvmf/common.sh@478 -- # killprocess 2936739 00:07:05.938 06:47:12 -- common/autotest_common.sh@926 -- # '[' -z 2936739 ']' 00:07:05.938 06:47:12 -- common/autotest_common.sh@930 -- # kill -0 2936739 00:07:05.938 
06:47:12 -- common/autotest_common.sh@931 -- # uname 00:07:05.938 06:47:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:05.938 06:47:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2936739 00:07:05.938 06:47:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:05.938 06:47:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:05.938 06:47:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2936739' 00:07:05.938 killing process with pid 2936739 00:07:05.938 06:47:13 -- common/autotest_common.sh@945 -- # kill 2936739 00:07:05.938 [2024-05-12 06:47:13.031874] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:05.938 06:47:13 -- common/autotest_common.sh@950 -- # wait 2936739 00:07:06.197 06:47:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:06.197 06:47:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:06.197 06:47:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:06.197 06:47:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:06.197 06:47:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:06.197 06:47:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.197 06:47:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.197 06:47:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.768 06:47:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:08.768 00:07:08.768 real 0m5.963s 00:07:08.768 user 0m6.853s 00:07:08.768 sys 0m1.808s 00:07:08.768 06:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.768 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.768 ************************************ 00:07:08.768 END TEST nvmf_discovery 00:07:08.768 ************************************ 00:07:08.768 06:47:15 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:08.768 06:47:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:08.768 06:47:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.768 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.768 ************************************ 00:07:08.768 START TEST nvmf_referrals 00:07:08.768 ************************************ 00:07:08.768 06:47:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:08.768 * Looking for test storage... 00:07:08.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.768 06:47:15 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.768 06:47:15 -- nvmf/common.sh@7 -- # uname -s 00:07:08.768 06:47:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.768 06:47:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.768 06:47:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.768 06:47:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.768 06:47:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.768 06:47:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.768 06:47:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.768 06:47:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.768 06:47:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.768 06:47:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.768 06:47:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.768 06:47:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.768 06:47:15 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.768 06:47:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.768 06:47:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.768 06:47:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.768 06:47:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.768 06:47:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.768 06:47:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.768 06:47:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.768 06:47:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.768 06:47:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.768 06:47:15 -- paths/export.sh@5 -- # export PATH 00:07:08.768 06:47:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.769 06:47:15 -- nvmf/common.sh@46 -- # : 0 00:07:08.769 06:47:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:08.769 06:47:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:08.769 06:47:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:08.769 06:47:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.769 06:47:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.769 06:47:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:08.769 06:47:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:08.769 06:47:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:08.769 06:47:15 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:08.769 06:47:15 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:08.769 06:47:15 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:08.769 06:47:15 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:08.769 06:47:15 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:08.769 06:47:15 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:08.769 06:47:15 -- target/referrals.sh@37 -- # nvmftestinit 00:07:08.769 06:47:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:08.769 06:47:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.769 06:47:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:08.769 06:47:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:08.769 06:47:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:08.769 06:47:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.769 06:47:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.769 06:47:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.769 06:47:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:08.769 06:47:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:08.769 06:47:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:08.769 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 06:47:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:10.675 06:47:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:10.675 06:47:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:10.675 06:47:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:10.675 06:47:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:10.675 06:47:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:10.675 06:47:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:10.675 06:47:17 -- nvmf/common.sh@294 -- # net_devs=() 00:07:10.675 06:47:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:10.675 06:47:17 -- nvmf/common.sh@295 -- # e810=() 00:07:10.675 06:47:17 -- nvmf/common.sh@295 -- # local 
-ga e810 00:07:10.675 06:47:17 -- nvmf/common.sh@296 -- # x722=() 00:07:10.675 06:47:17 -- nvmf/common.sh@296 -- # local -ga x722 00:07:10.675 06:47:17 -- nvmf/common.sh@297 -- # mlx=() 00:07:10.675 06:47:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:10.675 06:47:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.675 06:47:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:10.675 06:47:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:10.675 06:47:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:10.675 06:47:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:10.675 06:47:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:10.675 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:10.675 06:47:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:10.675 06:47:17 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:10.675 06:47:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:10.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:10.675 06:47:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:10.675 06:47:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:10.675 06:47:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.675 06:47:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:10.675 06:47:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.675 06:47:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:10.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:10.675 06:47:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.675 06:47:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:10.675 06:47:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.675 06:47:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:10.675 06:47:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.675 06:47:17 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:10.675 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:10.675 06:47:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.675 06:47:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:10.675 06:47:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:10.675 06:47:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:10.675 06:47:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.675 06:47:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.675 06:47:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.675 06:47:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:10.675 06:47:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.675 06:47:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.675 06:47:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:10.675 06:47:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.675 06:47:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.675 06:47:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:10.675 06:47:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:10.675 06:47:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.675 06:47:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.675 06:47:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.675 06:47:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.675 06:47:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:10.675 06:47:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.675 06:47:17 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:10.675 06:47:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.675 06:47:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:10.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:07:10.675 00:07:10.675 --- 10.0.0.2 ping statistics --- 00:07:10.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.675 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:10.675 06:47:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:07:10.675 00:07:10.675 --- 10.0.0.1 ping statistics --- 00:07:10.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.675 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:10.675 06:47:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.675 06:47:17 -- nvmf/common.sh@410 -- # return 0 00:07:10.675 06:47:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:10.675 06:47:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.675 06:47:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:10.675 06:47:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.675 06:47:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:10.675 06:47:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:10.675 06:47:17 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:10.675 06:47:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:10.675 06:47:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:10.675 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 06:47:17 -- nvmf/common.sh@469 -- # nvmfpid=2938866 00:07:10.675 06:47:17 
-- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.675 06:47:17 -- nvmf/common.sh@470 -- # waitforlisten 2938866 00:07:10.675 06:47:17 -- common/autotest_common.sh@819 -- # '[' -z 2938866 ']' 00:07:10.675 06:47:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.675 06:47:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:10.675 06:47:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.676 06:47:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:10.676 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:07:10.676 [2024-05-12 06:47:17.671948] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:10.676 [2024-05-12 06:47:17.672054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.676 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.676 [2024-05-12 06:47:17.735741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.934 [2024-05-12 06:47:17.846591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.934 [2024-05-12 06:47:17.846740] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.934 [2024-05-12 06:47:17.846758] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.934 [2024-05-12 06:47:17.846771] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
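The `waitforlisten 2938866` step traced above polls until the target's RPC socket (`rpc_addr=/var/tmp/spdk.sock`, `max_retries=100` per the xtrace) is ready before any `rpc_cmd` runs. A minimal standalone sketch of that polling idea — the socket path and the `touch` stand-in are illustrative, not the real helper:

```shell
# Stand-in for waitforlisten: poll until the app's RPC socket shows up.
rpc_addr=/tmp/demo_spdk.sock     # real helper waits on /var/tmp/spdk.sock
max_retries=100
touch "$rpc_addr"                # simulate nvmf_tgt having created its socket
i=0
while [ "$i" -lt "$max_retries" ]; do
  [ -e "$rpc_addr" ] && break    # real helper also probes the RPC endpoint
  sleep 0.1
  i=$((i + 1))
done
echo "rpc socket ready after $i retries"
rm -f "$rpc_addr"
```

In the trace the same pattern explains the gap between `nvmfpid=2938866` being recorded and the first `rpc_cmd nvmf_create_transport` call.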
00:07:10.934 [2024-05-12 06:47:17.846820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.934 [2024-05-12 06:47:17.846881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.934 [2024-05-12 06:47:17.846947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.934 [2024-05-12 06:47:17.846950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.868 06:47:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:11.868 06:47:18 -- common/autotest_common.sh@852 -- # return 0 00:07:11.868 06:47:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:11.868 06:47:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.868 06:47:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.868 06:47:18 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.868 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.868 [2024-05-12 06:47:18.672286] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.868 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:11.868 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.868 [2024-05-12 06:47:18.684444] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:11.868 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:11.868 06:47:18 -- common/autotest_common.sh@551 -- # 
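The referral portion of this run registers 127.0.0.2, 127.0.0.3, and 127.0.0.4 on port 4430 via `nvmf_discovery_add_referral` and then checks that `nvmf_discovery_get_referrals | jq length` comes back as 3. A self-contained sketch of that sequence with `rpc_cmd` stubbed out — the real `rpc_cmd` forwards to the SPDK target over its RPC socket; the stub below only mimics the bookkeeping so the control flow can run standalone:

```shell
# rpc_cmd stub: real helper talks to the SPDK target over /var/tmp/spdk.sock.
referrals=""
rpc_cmd() {
  case "$1" in
    nvmf_discovery_add_referral)  referrals="$referrals $5" ;;  # $5 = traddr (-a)
    nvmf_discovery_remove_referral)
      new=""
      for r in $referrals; do [ "$r" = "$5" ] || new="$new $r"; done
      referrals=$new ;;
    nvmf_discovery_get_referrals) for r in $referrals; do echo "$r"; done ;;
  esac
}

# mirrors referrals.sh: register the three referral targets on port 4430
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
rpc_cmd nvmf_discovery_get_referrals | wc -l   # the test expects 3 here
```

The later `nvmf_discovery_remove_referral` calls in the trace drive the same state back to an empty list, which is why the follow-up `jq length` check compares against 0.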
xtrace_disable 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.868 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:11.868 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.868 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:11.868 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.868 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:11.868 06:47:18 -- target/referrals.sh@48 -- # jq length 00:07:11.868 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.868 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:11.868 06:47:18 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:11.868 06:47:18 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:11.868 06:47:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:11.868 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.868 06:47:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:11.868 06:47:18 -- target/referrals.sh@21 -- # sort 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.868 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:11.868 06:47:18 -- target/referrals.sh@49 -- # [[ 127.0.0.2 
127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:11.868 06:47:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:11.868 06:47:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:11.868 06:47:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:11.868 06:47:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:11.868 06:47:18 -- target/referrals.sh@26 -- # sort 00:07:11.868 06:47:18 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:11.868 06:47:18 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:11.868 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.868 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.868 06:47:18 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:11.868 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.868 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.869 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.869 06:47:18 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:11.869 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.869 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.869 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.869 06:47:18 -- 
target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:11.869 06:47:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.869 06:47:18 -- target/referrals.sh@56 -- # jq length 00:07:11.869 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:07:11.869 06:47:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.869 06:47:18 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:11.869 06:47:18 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:11.869 06:47:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:11.869 06:47:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:11.869 06:47:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:11.869 06:47:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:11.869 06:47:18 -- target/referrals.sh@26 -- # sort 00:07:12.127 06:47:19 -- target/referrals.sh@26 -- # echo 00:07:12.127 06:47:19 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:12.127 06:47:19 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:12.127 06:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.127 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.127 06:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.127 06:47:19 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:12.127 06:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.127 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.127 06:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.127 06:47:19 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:12.127 06:47:19 -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:12.127 06:47:19 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:12.127 06:47:19 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:12.127 06:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.127 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.127 06:47:19 -- target/referrals.sh@21 -- # sort 00:07:12.127 06:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.127 06:47:19 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:12.127 06:47:19 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:12.127 06:47:19 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:12.127 06:47:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:12.127 06:47:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:12.127 06:47:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:12.127 06:47:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:12.127 06:47:19 -- target/referrals.sh@26 -- # sort 00:07:12.386 06:47:19 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:12.386 06:47:19 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:12.386 06:47:19 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:12.386 06:47:19 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:12.386 06:47:19 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:12.386 06:47:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:07:12.386 06:47:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:12.386 06:47:19 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:12.386 06:47:19 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:12.386 06:47:19 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:12.386 06:47:19 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:12.386 06:47:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:12.386 06:47:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:12.642 06:47:19 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:12.642 06:47:19 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:12.642 06:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.642 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.642 06:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.643 06:47:19 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:12.643 06:47:19 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:12.643 06:47:19 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:12.643 06:47:19 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:12.643 06:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.643 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.643 06:47:19 -- target/referrals.sh@21 -- # sort 00:07:12.643 06:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:07:12.643 06:47:19 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:12.643 06:47:19 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:12.643 06:47:19 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:12.643 06:47:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:12.643 06:47:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:12.643 06:47:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:12.643 06:47:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:12.643 06:47:19 -- target/referrals.sh@26 -- # sort 00:07:12.643 06:47:19 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:12.643 06:47:19 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:12.643 06:47:19 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:12.643 06:47:19 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:12.643 06:47:19 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:12.643 06:47:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:12.643 06:47:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:12.901 06:47:19 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:12.901 06:47:19 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:12.901 06:47:19 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:12.901 06:47:19 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:12.901 06:47:19 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:12.901 06:47:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:12.901 06:47:19 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:12.901 06:47:19 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:12.901 06:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.901 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.901 06:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.901 06:47:19 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:12.901 06:47:19 -- target/referrals.sh@82 -- # jq length 00:07:12.901 06:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.901 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.901 06:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.901 06:47:19 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:12.901 06:47:19 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:12.901 06:47:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:12.901 06:47:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:12.901 06:47:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:12.901 06:47:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:12.901 06:47:19 -- target/referrals.sh@26 -- # sort 00:07:12.901 06:47:20 -- target/referrals.sh@26 -- # echo 00:07:12.901 06:47:20 -- 
target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:12.901 06:47:20 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:12.901 06:47:20 -- target/referrals.sh@86 -- # nvmftestfini 00:07:12.901 06:47:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:12.901 06:47:20 -- nvmf/common.sh@116 -- # sync 00:07:12.901 06:47:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:12.901 06:47:20 -- nvmf/common.sh@119 -- # set +e 00:07:12.901 06:47:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:12.901 06:47:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:12.901 rmmod nvme_tcp 00:07:13.160 rmmod nvme_fabrics 00:07:13.160 rmmod nvme_keyring 00:07:13.160 06:47:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:13.160 06:47:20 -- nvmf/common.sh@123 -- # set -e 00:07:13.160 06:47:20 -- nvmf/common.sh@124 -- # return 0 00:07:13.160 06:47:20 -- nvmf/common.sh@477 -- # '[' -n 2938866 ']' 00:07:13.160 06:47:20 -- nvmf/common.sh@478 -- # killprocess 2938866 00:07:13.160 06:47:20 -- common/autotest_common.sh@926 -- # '[' -z 2938866 ']' 00:07:13.160 06:47:20 -- common/autotest_common.sh@930 -- # kill -0 2938866 00:07:13.160 06:47:20 -- common/autotest_common.sh@931 -- # uname 00:07:13.160 06:47:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:13.160 06:47:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2938866 00:07:13.160 06:47:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:13.160 06:47:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:13.160 06:47:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2938866' 00:07:13.160 killing process with pid 2938866 00:07:13.160 06:47:20 -- common/autotest_common.sh@945 -- # kill 2938866 00:07:13.160 06:47:20 -- common/autotest_common.sh@950 -- # wait 2938866 00:07:13.418 06:47:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:13.418 06:47:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:13.418 
06:47:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:13.418 06:47:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:13.418 06:47:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:13.418 06:47:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.418 06:47:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.418 06:47:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.339 06:47:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:15.339 00:07:15.339 real 0m7.045s 00:07:15.339 user 0m11.527s 00:07:15.339 sys 0m1.993s 00:07:15.339 06:47:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.339 06:47:22 -- common/autotest_common.sh@10 -- # set +x 00:07:15.339 ************************************ 00:07:15.339 END TEST nvmf_referrals 00:07:15.339 ************************************ 00:07:15.339 06:47:22 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:15.339 06:47:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:15.339 06:47:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.339 06:47:22 -- common/autotest_common.sh@10 -- # set +x 00:07:15.339 ************************************ 00:07:15.339 START TEST nvmf_connect_disconnect 00:07:15.339 ************************************ 00:07:15.339 06:47:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:15.631 * Looking for test storage... 
00:07:15.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.631 06:47:22 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.631 06:47:22 -- nvmf/common.sh@7 -- # uname -s 00:07:15.631 06:47:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.631 06:47:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.631 06:47:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.631 06:47:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.631 06:47:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.631 06:47:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.631 06:47:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.631 06:47:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.631 06:47:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.631 06:47:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.631 06:47:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.631 06:47:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.631 06:47:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.631 06:47:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.631 06:47:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.631 06:47:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.631 06:47:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.631 06:47:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.631 06:47:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.631 06:47:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.631 06:47:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.631 06:47:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.631 06:47:22 -- paths/export.sh@5 -- # export PATH 00:07:15.631 06:47:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.631 06:47:22 -- nvmf/common.sh@46 -- # : 0 00:07:15.631 06:47:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:15.631 06:47:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:15.631 06:47:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:15.631 06:47:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.631 06:47:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.631 06:47:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:15.631 06:47:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:15.631 06:47:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:15.631 06:47:22 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:15.631 06:47:22 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:15.632 06:47:22 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:15.632 06:47:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:15.632 06:47:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.632 06:47:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:15.632 06:47:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:15.632 06:47:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:15.632 06:47:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.632 06:47:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.632 06:47:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:15.632 06:47:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:15.632 06:47:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:15.632 06:47:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:15.632 06:47:22 -- common/autotest_common.sh@10 -- # set +x 00:07:17.539 06:47:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:17.539 06:47:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:17.539 06:47:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:17.539 06:47:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:17.539 06:47:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:17.539 06:47:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:17.539 06:47:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:17.540 06:47:24 -- nvmf/common.sh@294 -- # net_devs=() 00:07:17.540 06:47:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:17.540 06:47:24 -- nvmf/common.sh@295 -- # e810=() 00:07:17.540 06:47:24 -- nvmf/common.sh@295 -- # local -ga e810 00:07:17.540 06:47:24 -- nvmf/common.sh@296 -- # x722=() 00:07:17.540 06:47:24 -- nvmf/common.sh@296 -- # local -ga x722 00:07:17.540 06:47:24 -- nvmf/common.sh@297 -- # mlx=() 00:07:17.540 06:47:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:17.540 06:47:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:17.540 06:47:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.540 06:47:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:17.540 06:47:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:17.540 06:47:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:17.540 06:47:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:17.540 06:47:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:17.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:17.540 06:47:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:17.540 06:47:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:17.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:17.540 06:47:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:17.540 06:47:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:17.540 
06:47:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:17.540 06:47:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.540 06:47:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:17.540 06:47:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.540 06:47:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:17.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:17.540 06:47:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.540 06:47:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:17.540 06:47:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.540 06:47:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:17.540 06:47:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.540 06:47:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:17.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:17.540 06:47:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.540 06:47:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:17.540 06:47:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:17.540 06:47:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:17.540 06:47:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:17.540 06:47:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.540 06:47:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.540 06:47:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.540 06:47:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:17.540 06:47:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.540 06:47:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.540 06:47:24 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:17.540 06:47:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.540 06:47:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.540 06:47:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:17.540 06:47:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:17.540 06:47:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.540 06:47:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.540 06:47:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.540 06:47:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.540 06:47:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:17.540 06:47:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.540 06:47:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.540 06:47:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.540 06:47:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:17.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:07:17.540 00:07:17.540 --- 10.0.0.2 ping statistics --- 00:07:17.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.540 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:17.540 06:47:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:17.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:07:17.540 00:07:17.540 --- 10.0.0.1 ping statistics --- 00:07:17.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.540 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:07:17.540 06:47:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.540 06:47:24 -- nvmf/common.sh@410 -- # return 0 00:07:17.801 06:47:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:17.801 06:47:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.801 06:47:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:17.801 06:47:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:17.801 06:47:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.801 06:47:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:17.801 06:47:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:17.801 06:47:24 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:17.801 06:47:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:17.801 06:47:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:17.801 06:47:24 -- common/autotest_common.sh@10 -- # set +x 00:07:17.801 06:47:24 -- nvmf/common.sh@469 -- # nvmfpid=2941181 00:07:17.801 06:47:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.801 06:47:24 -- nvmf/common.sh@470 -- # waitforlisten 2941181 00:07:17.801 06:47:24 -- common/autotest_common.sh@819 -- # '[' -z 2941181 ']' 00:07:17.801 06:47:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.801 06:47:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.801 06:47:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:17.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.801 06:47:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.801 06:47:24 -- common/autotest_common.sh@10 -- # set +x 00:07:17.801 [2024-05-12 06:47:24.734114] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:17.801 [2024-05-12 06:47:24.734200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.801 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.801 [2024-05-12 06:47:24.808501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.059 [2024-05-12 06:47:24.932673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.059 [2024-05-12 06:47:24.932847] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.059 [2024-05-12 06:47:24.932869] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.059 [2024-05-12 06:47:24.932884] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:18.059 [2024-05-12 06:47:24.932976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.059 [2024-05-12 06:47:24.933029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.059 [2024-05-12 06:47:24.933081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.059 [2024-05-12 06:47:24.933084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.626 06:47:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.626 06:47:25 -- common/autotest_common.sh@852 -- # return 0 00:07:18.626 06:47:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:18.626 06:47:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:18.626 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.626 06:47:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.626 06:47:25 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:18.626 06:47:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.626 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.626 [2024-05-12 06:47:25.733240] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.626 06:47:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.626 06:47:25 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:18.626 06:47:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.626 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.885 06:47:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.885 06:47:25 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:18.885 06:47:25 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:18.885 06:47:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.885 06:47:25 -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.885 06:47:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.885 06:47:25 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:18.885 06:47:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.885 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.885 06:47:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.885 06:47:25 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.885 06:47:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.885 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.885 [2024-05-12 06:47:25.786527] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.885 06:47:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.885 06:47:25 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:18.885 06:47:25 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:18.885 06:47:25 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:18.885 06:47:25 -- target/connect_disconnect.sh@34 -- # set +x 00:07:21.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:07:44.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.283 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.307 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.283 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.078 06:51:12 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:05.078 06:51:12 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:05.078 06:51:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:05.078 06:51:12 -- nvmf/common.sh@116 -- # sync 00:11:05.078 06:51:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:05.078 06:51:12 -- nvmf/common.sh@119 -- # set +e 00:11:05.078 06:51:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:05.078 06:51:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:05.078 rmmod nvme_tcp 00:11:05.338 rmmod nvme_fabrics 00:11:05.338 rmmod nvme_keyring 00:11:05.338 06:51:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:05.338 06:51:12 -- nvmf/common.sh@123 -- # set -e 00:11:05.338 06:51:12 -- nvmf/common.sh@124 -- # return 0 00:11:05.338 06:51:12 -- nvmf/common.sh@477 -- # '[' -n 2941181 ']' 00:11:05.338 06:51:12 -- nvmf/common.sh@478 -- # killprocess 2941181 00:11:05.338 06:51:12 -- common/autotest_common.sh@926 -- # '[' -z 2941181 ']' 00:11:05.338 06:51:12 -- common/autotest_common.sh@930 -- # kill -0 2941181 00:11:05.338 06:51:12 -- common/autotest_common.sh@931 -- # uname 00:11:05.338 06:51:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:05.338 06:51:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
2941181 00:11:05.338 06:51:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:05.338 06:51:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:05.338 06:51:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2941181' 00:11:05.338 killing process with pid 2941181 00:11:05.338 06:51:12 -- common/autotest_common.sh@945 -- # kill 2941181 00:11:05.338 06:51:12 -- common/autotest_common.sh@950 -- # wait 2941181 00:11:05.595 06:51:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:05.595 06:51:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:05.595 06:51:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:05.595 06:51:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.595 06:51:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:05.595 06:51:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.595 06:51:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.595 06:51:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.497 06:51:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:07.497 00:11:07.497 real 3m52.164s 00:11:07.497 user 14m43.757s 00:11:07.497 sys 0m31.123s 00:11:07.497 06:51:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.497 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:11:07.497 ************************************ 00:11:07.497 END TEST nvmf_connect_disconnect 00:11:07.497 ************************************ 00:11:07.758 06:51:14 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:07.758 06:51:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:07.758 06:51:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:07.758 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:11:07.758 ************************************ 00:11:07.758 
START TEST nvmf_multitarget 00:11:07.758 ************************************ 00:11:07.758 06:51:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:07.758 * Looking for test storage... 00:11:07.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.758 06:51:14 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.758 06:51:14 -- nvmf/common.sh@7 -- # uname -s 00:11:07.758 06:51:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.758 06:51:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.758 06:51:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.758 06:51:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.758 06:51:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.758 06:51:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.758 06:51:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.758 06:51:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.758 06:51:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.758 06:51:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.758 06:51:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.758 06:51:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.758 06:51:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.758 06:51:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.758 06:51:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.758 06:51:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.758 06:51:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.758 06:51:14 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.758 06:51:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.758 06:51:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.758 06:51:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.758 06:51:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.758 06:51:14 -- paths/export.sh@5 -- # export PATH 00:11:07.758 06:51:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.758 06:51:14 -- nvmf/common.sh@46 -- # : 0 00:11:07.758 06:51:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:07.758 06:51:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:07.758 06:51:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:07.758 06:51:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.758 06:51:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.758 06:51:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:07.758 06:51:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:07.758 06:51:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:07.758 06:51:14 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:07.758 06:51:14 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:07.758 06:51:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:07.758 06:51:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.758 06:51:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:07.758 06:51:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:07.758 06:51:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:07.758 06:51:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.758 06:51:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.758 06:51:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.758 06:51:14 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:07.758 06:51:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:07.758 06:51:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:07.758 06:51:14 -- common/autotest_common.sh@10 -- # set +x 00:11:09.666 06:51:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:09.666 06:51:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:09.666 06:51:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:09.666 06:51:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:09.666 06:51:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:09.666 06:51:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:09.666 06:51:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:09.666 06:51:16 -- nvmf/common.sh@294 -- # net_devs=() 00:11:09.666 06:51:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:09.666 06:51:16 -- nvmf/common.sh@295 -- # e810=() 00:11:09.666 06:51:16 -- nvmf/common.sh@295 -- # local -ga e810 00:11:09.666 06:51:16 -- nvmf/common.sh@296 -- # x722=() 00:11:09.666 06:51:16 -- nvmf/common.sh@296 -- # local -ga x722 00:11:09.666 06:51:16 -- nvmf/common.sh@297 -- # mlx=() 00:11:09.666 06:51:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:09.666 06:51:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.666 06:51:16 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.666 06:51:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:09.666 06:51:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:09.666 06:51:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:09.666 06:51:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:09.666 06:51:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:09.666 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:09.666 06:51:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:09.666 06:51:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:09.666 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:09.666 06:51:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:09.666 06:51:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:09.666 06:51:16 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:09.666 06:51:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.666 06:51:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:09.666 06:51:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.666 06:51:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:09.666 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:09.666 06:51:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.666 06:51:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:09.666 06:51:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.666 06:51:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:09.666 06:51:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.666 06:51:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:09.666 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:09.666 06:51:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.666 06:51:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:09.666 06:51:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:09.666 06:51:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:09.666 06:51:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:09.666 06:51:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.666 06:51:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.666 06:51:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.666 06:51:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:09.666 06:51:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.666 06:51:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.666 06:51:16 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:09.666 06:51:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.666 06:51:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.666 06:51:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:09.666 06:51:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:09.666 06:51:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.666 06:51:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.925 06:51:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.925 06:51:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.925 06:51:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:09.925 06:51:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.925 06:51:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.925 06:51:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.925 06:51:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:09.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:11:09.925 00:11:09.925 --- 10.0.0.2 ping statistics --- 00:11:09.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.925 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:11:09.925 06:51:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:11:09.925 00:11:09.925 --- 10.0.0.1 ping statistics --- 00:11:09.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.925 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:09.925 06:51:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.925 06:51:16 -- nvmf/common.sh@410 -- # return 0 00:11:09.925 06:51:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:09.925 06:51:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.925 06:51:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:09.925 06:51:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:09.925 06:51:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.925 06:51:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:09.925 06:51:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:09.925 06:51:16 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:09.925 06:51:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:09.925 06:51:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:09.925 06:51:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.925 06:51:16 -- nvmf/common.sh@469 -- # nvmfpid=2973254 00:11:09.925 06:51:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.925 06:51:16 -- nvmf/common.sh@470 -- # waitforlisten 2973254 00:11:09.925 06:51:16 -- common/autotest_common.sh@819 -- # '[' -z 2973254 ']' 00:11:09.925 06:51:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.926 06:51:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:09.926 06:51:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:09.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.926 06:51:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:09.926 06:51:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.926 [2024-05-12 06:51:16.931775] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:09.926 [2024-05-12 06:51:16.931862] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.926 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.926 [2024-05-12 06:51:16.996988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.185 [2024-05-12 06:51:17.116611] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:10.185 [2024-05-12 06:51:17.116764] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.185 [2024-05-12 06:51:17.116783] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.185 [2024-05-12 06:51:17.116796] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:10.185 [2024-05-12 06:51:17.116853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.185 [2024-05-12 06:51:17.116884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.185 [2024-05-12 06:51:17.116942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.185 [2024-05-12 06:51:17.116944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.121 06:51:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:11.121 06:51:17 -- common/autotest_common.sh@852 -- # return 0 00:11:11.121 06:51:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:11.121 06:51:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:11.121 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.121 06:51:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.121 06:51:17 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:11.121 06:51:17 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:11.121 06:51:17 -- target/multitarget.sh@21 -- # jq length 00:11:11.121 06:51:18 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:11.121 06:51:18 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:11.121 "nvmf_tgt_1" 00:11:11.121 06:51:18 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:11.381 "nvmf_tgt_2" 00:11:11.381 06:51:18 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:11.381 06:51:18 -- target/multitarget.sh@28 -- # jq length 00:11:11.381 
06:51:18 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:11.381 06:51:18 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:11.381 true 00:11:11.381 06:51:18 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:11.676 true 00:11:11.676 06:51:18 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:11.676 06:51:18 -- target/multitarget.sh@35 -- # jq length 00:11:11.676 06:51:18 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:11.676 06:51:18 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:11.676 06:51:18 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:11.676 06:51:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:11.676 06:51:18 -- nvmf/common.sh@116 -- # sync 00:11:11.676 06:51:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:11.676 06:51:18 -- nvmf/common.sh@119 -- # set +e 00:11:11.676 06:51:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:11.676 06:51:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:11.676 rmmod nvme_tcp 00:11:11.676 rmmod nvme_fabrics 00:11:11.676 rmmod nvme_keyring 00:11:11.676 06:51:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:11.676 06:51:18 -- nvmf/common.sh@123 -- # set -e 00:11:11.676 06:51:18 -- nvmf/common.sh@124 -- # return 0 00:11:11.676 06:51:18 -- nvmf/common.sh@477 -- # '[' -n 2973254 ']' 00:11:11.676 06:51:18 -- nvmf/common.sh@478 -- # killprocess 2973254 00:11:11.676 06:51:18 -- common/autotest_common.sh@926 -- # '[' -z 2973254 ']' 00:11:11.676 06:51:18 -- common/autotest_common.sh@930 -- # kill -0 2973254 00:11:11.676 06:51:18 -- common/autotest_common.sh@931 -- # uname 00:11:11.676 06:51:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:11:11.676 06:51:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2973254 00:11:11.935 06:51:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:11.936 06:51:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:11.936 06:51:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2973254' 00:11:11.936 killing process with pid 2973254 00:11:11.936 06:51:18 -- common/autotest_common.sh@945 -- # kill 2973254 00:11:11.936 06:51:18 -- common/autotest_common.sh@950 -- # wait 2973254 00:11:12.196 06:51:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:12.196 06:51:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:12.196 06:51:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:12.196 06:51:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.196 06:51:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:12.196 06:51:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.196 06:51:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.196 06:51:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.101 06:51:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:14.101 00:11:14.101 real 0m6.466s 00:11:14.101 user 0m9.358s 00:11:14.101 sys 0m1.923s 00:11:14.101 06:51:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.101 06:51:21 -- common/autotest_common.sh@10 -- # set +x 00:11:14.101 ************************************ 00:11:14.101 END TEST nvmf_multitarget 00:11:14.101 ************************************ 00:11:14.101 06:51:21 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:14.101 06:51:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:14.101 06:51:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.101 06:51:21 -- common/autotest_common.sh@10 -- # set +x 
00:11:14.101 ************************************ 00:11:14.101 START TEST nvmf_rpc 00:11:14.101 ************************************ 00:11:14.101 06:51:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:14.101 * Looking for test storage... 00:11:14.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.101 06:51:21 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.101 06:51:21 -- nvmf/common.sh@7 -- # uname -s 00:11:14.101 06:51:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.101 06:51:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.101 06:51:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.101 06:51:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.101 06:51:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.101 06:51:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.101 06:51:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.101 06:51:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.101 06:51:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.101 06:51:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.101 06:51:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.101 06:51:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.101 06:51:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.101 06:51:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.101 06:51:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.101 06:51:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.101 06:51:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
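In the common.sh sourcing above, NVME_HOSTNQN comes from `nvme gen-hostnqn`, which emits an NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`, and NVME_HOSTID reuses the trailing UUID. A rough Python equivalent of that derivation (illustrative only; the real script shells out to nvme-cli):

```python
import re
import uuid

def gen_hostnqn() -> str:
    # Mimic `nvme gen-hostnqn`: a freshly generated UUID-based host NQN.
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

def hostid_from_nqn(nqn: str) -> str:
    # common.sh keeps only the UUID part as NVME_HOSTID.
    return nqn.rsplit(":", 1)[-1]

nqn = gen_hostnqn()
assert re.fullmatch(r"nqn\.2014-08\.org\.nvmexpress:uuid:[0-9a-f-]{36}", nqn)
```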
00:11:14.101 06:51:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.101 06:51:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.101 06:51:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.101 06:51:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.101 06:51:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.101 06:51:21 -- paths/export.sh@5 -- # export PATH 00:11:14.102 06:51:21 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.102 06:51:21 -- nvmf/common.sh@46 -- # : 0 00:11:14.102 06:51:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:14.102 06:51:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:14.102 06:51:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:14.102 06:51:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.102 06:51:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.102 06:51:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:14.102 06:51:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:14.102 06:51:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:14.102 06:51:21 -- target/rpc.sh@11 -- # loops=5 00:11:14.102 06:51:21 -- target/rpc.sh@23 -- # nvmftestinit 00:11:14.102 06:51:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:14.102 06:51:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.102 06:51:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:14.102 06:51:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:14.102 06:51:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:14.102 06:51:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.102 06:51:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.102 06:51:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.102 06:51:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:14.102 06:51:21 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:14.102 06:51:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:14.102 06:51:21 -- common/autotest_common.sh@10 -- # set +x 00:11:16.636 06:51:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:16.636 06:51:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:16.636 06:51:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:16.636 06:51:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:16.636 06:51:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:16.636 06:51:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:16.636 06:51:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:16.636 06:51:23 -- nvmf/common.sh@294 -- # net_devs=() 00:11:16.636 06:51:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:16.636 06:51:23 -- nvmf/common.sh@295 -- # e810=() 00:11:16.636 06:51:23 -- nvmf/common.sh@295 -- # local -ga e810 00:11:16.636 06:51:23 -- nvmf/common.sh@296 -- # x722=() 00:11:16.636 06:51:23 -- nvmf/common.sh@296 -- # local -ga x722 00:11:16.636 06:51:23 -- nvmf/common.sh@297 -- # mlx=() 00:11:16.636 06:51:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:16.636 06:51:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
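gather_supported_nvmf_pci_devs above buckets NICs by PCI vendor/device ID: Intel 0x1592/0x159b into e810, 0x37d2 into x722, and the listed Mellanox IDs into mlx. A small Python sketch of that classification, using only the IDs visible in the trace (the table is assumed complete only for those IDs):

```python
# Vendor/device IDs taken from the pci_bus_cache lookups in the trace above;
# anything not listed is simply "unknown" in this sketch.
INTEL, MELLANOX = 0x8086, 0x15B3

FAMILIES = {
    (INTEL, 0x1592): "e810", (INTEL, 0x159B): "e810",
    (INTEL, 0x37D2): "x722",
    (MELLANOX, 0xA2DC): "mlx", (MELLANOX, 0x1021): "mlx",
    (MELLANOX, 0xA2D6): "mlx", (MELLANOX, 0x101D): "mlx",
    (MELLANOX, 0x1017): "mlx", (MELLANOX, 0x1019): "mlx",
    (MELLANOX, 0x1015): "mlx", (MELLANOX, 0x1013): "mlx",
}

def classify(vendor: int, device: int) -> str:
    return FAMILIES.get((vendor, device), "unknown")

# This run found two 0x8086:0x159b ports (0000:0a:00.0/1), classed as e810.
assert classify(0x8086, 0x159B) == "e810"
```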
00:11:16.636 06:51:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.636 06:51:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:16.636 06:51:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:16.636 06:51:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:16.636 06:51:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:16.636 06:51:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:16.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:16.636 06:51:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:16.636 06:51:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:16.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:16.636 06:51:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:16.636 06:51:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:16.636 06:51:23 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:16.636 06:51:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.636 06:51:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:16.636 06:51:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.636 06:51:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:16.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:16.636 06:51:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.636 06:51:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:16.636 06:51:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.636 06:51:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:16.636 06:51:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.636 06:51:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:16.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:16.636 06:51:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.636 06:51:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:16.636 06:51:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:16.636 06:51:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:16.636 06:51:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.636 06:51:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.636 06:51:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.636 06:51:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:16.636 06:51:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.636 06:51:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.636 06:51:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:16.636 06:51:23 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.636 06:51:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.636 06:51:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:16.636 06:51:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:16.636 06:51:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.636 06:51:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.636 06:51:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.636 06:51:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.636 06:51:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:16.636 06:51:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.636 06:51:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.636 06:51:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.636 06:51:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:16.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:11:16.636 00:11:16.636 --- 10.0.0.2 ping statistics --- 00:11:16.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.636 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:11:16.636 06:51:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:11:16.636 00:11:16.636 --- 10.0.0.1 ping statistics --- 00:11:16.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.636 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:16.636 06:51:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.636 06:51:23 -- nvmf/common.sh@410 -- # return 0 00:11:16.636 06:51:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:16.636 06:51:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.636 06:51:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:16.636 06:51:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.636 06:51:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:16.636 06:51:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:16.636 06:51:23 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:16.636 06:51:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:16.636 06:51:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:16.636 06:51:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.636 06:51:23 -- nvmf/common.sh@469 -- # nvmfpid=2975493 00:11:16.636 06:51:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.636 06:51:23 -- nvmf/common.sh@470 -- # waitforlisten 2975493 00:11:16.636 06:51:23 -- common/autotest_common.sh@819 -- # '[' -z 2975493 ']' 00:11:16.636 06:51:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.636 06:51:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:16.636 06:51:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:16.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.636 06:51:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:16.636 06:51:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.636 [2024-05-12 06:51:23.422784] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:16.636 [2024-05-12 06:51:23.422876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.636 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.636 [2024-05-12 06:51:23.488215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.636 [2024-05-12 06:51:23.600496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:16.636 [2024-05-12 06:51:23.600641] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.636 [2024-05-12 06:51:23.600659] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.636 [2024-05-12 06:51:23.600672] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:16.636 [2024-05-12 06:51:23.600740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.636 [2024-05-12 06:51:23.600762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.636 [2024-05-12 06:51:23.600834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.636 [2024-05-12 06:51:23.600837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.573 06:51:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:17.573 06:51:24 -- common/autotest_common.sh@852 -- # return 0 00:11:17.573 06:51:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:17.573 06:51:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:17.573 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.573 06:51:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.573 06:51:24 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:17.573 06:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.573 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.573 06:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.573 06:51:24 -- target/rpc.sh@26 -- # stats='{ 00:11:17.573 "tick_rate": 2700000000, 00:11:17.573 "poll_groups": [ 00:11:17.573 { 00:11:17.573 "name": "nvmf_tgt_poll_group_0", 00:11:17.573 "admin_qpairs": 0, 00:11:17.573 "io_qpairs": 0, 00:11:17.573 "current_admin_qpairs": 0, 00:11:17.573 "current_io_qpairs": 0, 00:11:17.573 "pending_bdev_io": 0, 00:11:17.573 "completed_nvme_io": 0, 00:11:17.573 "transports": [] 00:11:17.573 }, 00:11:17.573 { 00:11:17.573 "name": "nvmf_tgt_poll_group_1", 00:11:17.573 "admin_qpairs": 0, 00:11:17.573 "io_qpairs": 0, 00:11:17.573 "current_admin_qpairs": 0, 00:11:17.573 "current_io_qpairs": 0, 00:11:17.573 "pending_bdev_io": 0, 00:11:17.573 "completed_nvme_io": 0, 00:11:17.573 "transports": [] 00:11:17.573 }, 00:11:17.573 { 00:11:17.573 "name": 
"nvmf_tgt_poll_group_2", 00:11:17.573 "admin_qpairs": 0, 00:11:17.573 "io_qpairs": 0, 00:11:17.573 "current_admin_qpairs": 0, 00:11:17.573 "current_io_qpairs": 0, 00:11:17.573 "pending_bdev_io": 0, 00:11:17.573 "completed_nvme_io": 0, 00:11:17.573 "transports": [] 00:11:17.573 }, 00:11:17.573 { 00:11:17.573 "name": "nvmf_tgt_poll_group_3", 00:11:17.573 "admin_qpairs": 0, 00:11:17.573 "io_qpairs": 0, 00:11:17.573 "current_admin_qpairs": 0, 00:11:17.573 "current_io_qpairs": 0, 00:11:17.573 "pending_bdev_io": 0, 00:11:17.573 "completed_nvme_io": 0, 00:11:17.573 "transports": [] 00:11:17.573 } 00:11:17.573 ] 00:11:17.573 }' 00:11:17.573 06:51:24 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:17.573 06:51:24 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:17.573 06:51:24 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:17.573 06:51:24 -- target/rpc.sh@15 -- # wc -l 00:11:17.573 06:51:24 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:17.573 06:51:24 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:17.573 06:51:24 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:17.573 06:51:24 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.573 06:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.573 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.573 [2024-05-12 06:51:24.543763] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.573 06:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.573 06:51:24 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:17.573 06:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.573 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.573 06:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.573 06:51:24 -- target/rpc.sh@33 -- # stats='{ 00:11:17.573 "tick_rate": 2700000000, 00:11:17.573 "poll_groups": [ 00:11:17.573 { 00:11:17.573 "name": 
"nvmf_tgt_poll_group_0", 00:11:17.573 "admin_qpairs": 0, 00:11:17.573 "io_qpairs": 0, 00:11:17.573 "current_admin_qpairs": 0, 00:11:17.573 "current_io_qpairs": 0, 00:11:17.573 "pending_bdev_io": 0, 00:11:17.573 "completed_nvme_io": 0, 00:11:17.573 "transports": [ 00:11:17.573 { 00:11:17.573 "trtype": "TCP" 00:11:17.573 } 00:11:17.573 ] 00:11:17.573 }, 00:11:17.573 { 00:11:17.573 "name": "nvmf_tgt_poll_group_1", 00:11:17.573 "admin_qpairs": 0, 00:11:17.573 "io_qpairs": 0, 00:11:17.573 "current_admin_qpairs": 0, 00:11:17.573 "current_io_qpairs": 0, 00:11:17.573 "pending_bdev_io": 0, 00:11:17.573 "completed_nvme_io": 0, 00:11:17.573 "transports": [ 00:11:17.573 { 00:11:17.573 "trtype": "TCP" 00:11:17.573 } 00:11:17.573 ] 00:11:17.573 }, 00:11:17.573 { 00:11:17.573 "name": "nvmf_tgt_poll_group_2", 00:11:17.573 "admin_qpairs": 0, 00:11:17.573 "io_qpairs": 0, 00:11:17.573 "current_admin_qpairs": 0, 00:11:17.573 "current_io_qpairs": 0, 00:11:17.573 "pending_bdev_io": 0, 00:11:17.573 "completed_nvme_io": 0, 00:11:17.573 "transports": [ 00:11:17.573 { 00:11:17.573 "trtype": "TCP" 00:11:17.573 } 00:11:17.573 ] 00:11:17.573 }, 00:11:17.573 { 00:11:17.573 "name": "nvmf_tgt_poll_group_3", 00:11:17.573 "admin_qpairs": 0, 00:11:17.573 "io_qpairs": 0, 00:11:17.573 "current_admin_qpairs": 0, 00:11:17.573 "current_io_qpairs": 0, 00:11:17.573 "pending_bdev_io": 0, 00:11:17.573 "completed_nvme_io": 0, 00:11:17.573 "transports": [ 00:11:17.573 { 00:11:17.573 "trtype": "TCP" 00:11:17.573 } 00:11:17.574 ] 00:11:17.574 } 00:11:17.574 ] 00:11:17.574 }' 00:11:17.574 06:51:24 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:17.574 06:51:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:17.574 06:51:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:17.574 06:51:24 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:17.574 06:51:24 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:17.574 06:51:24 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:11:17.574 06:51:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:17.574 06:51:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:17.574 06:51:24 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:17.574 06:51:24 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:17.574 06:51:24 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:17.574 06:51:24 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:17.574 06:51:24 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:17.574 06:51:24 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:17.574 06:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.574 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.574 Malloc1 00:11:17.574 06:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.574 06:51:24 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.574 06:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.574 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.574 06:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.574 06:51:24 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.574 06:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.574 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.574 06:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.574 06:51:24 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:17.574 06:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.574 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.574 06:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.574 06:51:24 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:17.574 06:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.574 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.833 [2024-05-12 06:51:24.705097] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.833 06:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.833 06:51:24 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:17.833 06:51:24 -- common/autotest_common.sh@640 -- # local es=0 00:11:17.833 06:51:24 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:17.833 06:51:24 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:17.833 06:51:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:17.833 06:51:24 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:17.833 06:51:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:17.833 06:51:24 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:17.833 06:51:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:17.833 06:51:24 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:17.833 06:51:24 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:17.833 06:51:24 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:17.833 [2024-05-12 06:51:24.727787] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:17.833 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:17.833 could not add new controller: failed to write to nvme-fabrics device 00:11:17.833 06:51:24 -- common/autotest_common.sh@643 -- # es=1 00:11:17.833 06:51:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:17.833 06:51:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:17.833 06:51:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:17.833 06:51:24 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.833 06:51:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.833 06:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.833 06:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.833 06:51:24 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.401 06:51:25 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.401 06:51:25 -- common/autotest_common.sh@1177 -- # local i=0 00:11:18.401 06:51:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.401 06:51:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:18.401 06:51:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:20.308 06:51:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:20.308 06:51:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:20.308 06:51:27 -- common/autotest_common.sh@1186 -- 
# grep -c SPDKISFASTANDAWESOME 00:11:20.308 06:51:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:20.308 06:51:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.308 06:51:27 -- common/autotest_common.sh@1187 -- # return 0 00:11:20.308 06:51:27 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.308 06:51:27 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.308 06:51:27 -- common/autotest_common.sh@1198 -- # local i=0 00:11:20.308 06:51:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:20.308 06:51:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.308 06:51:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:20.308 06:51:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.308 06:51:27 -- common/autotest_common.sh@1210 -- # return 0 00:11:20.308 06:51:27 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:20.308 06:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:20.308 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:11:20.308 06:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:20.308 06:51:27 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:20.308 06:51:27 -- common/autotest_common.sh@640 -- # local es=0 00:11:20.308 06:51:27 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:11:20.308 06:51:27 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:20.308 06:51:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:20.308 06:51:27 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:20.308 06:51:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:20.308 06:51:27 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:20.308 06:51:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:20.308 06:51:27 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:20.308 06:51:27 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:20.308 06:51:27 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:20.308 [2024-05-12 06:51:27.406460] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:20.308 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:20.308 could not add new controller: failed to write to nvme-fabrics device 00:11:20.308 06:51:27 -- common/autotest_common.sh@643 -- # es=1 00:11:20.308 06:51:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:20.308 06:51:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:20.309 06:51:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:20.309 06:51:27 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:20.309 06:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:20.309 06:51:27 -- common/autotest_common.sh@10 -- # set +x 00:11:20.309 06:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:20.309 06:51:27 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.246 06:51:28 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.246 06:51:28 -- common/autotest_common.sh@1177 -- # local i=0 00:11:21.246 06:51:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.246 06:51:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:21.246 06:51:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:23.155 06:51:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:23.155 06:51:30 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.155 06:51:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:23.155 06:51:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:23.155 06:51:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.155 06:51:30 -- common/autotest_common.sh@1187 -- # return 0 00:11:23.155 06:51:30 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.155 06:51:30 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.155 06:51:30 -- common/autotest_common.sh@1198 -- # local i=0 00:11:23.155 06:51:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:23.155 06:51:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.155 06:51:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:23.155 06:51:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.155 06:51:30 -- common/autotest_common.sh@1210 -- # return 0 00:11:23.155 06:51:30 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.155 06:51:30 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:11:23.155 06:51:30 -- common/autotest_common.sh@10 -- # set +x 00:11:23.155 06:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:23.155 06:51:30 -- target/rpc.sh@81 -- # seq 1 5 00:11:23.155 06:51:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:23.155 06:51:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:23.155 06:51:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:23.155 06:51:30 -- common/autotest_common.sh@10 -- # set +x 00:11:23.155 06:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:23.155 06:51:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.155 06:51:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:23.155 06:51:30 -- common/autotest_common.sh@10 -- # set +x 00:11:23.155 [2024-05-12 06:51:30.152004] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.155 06:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:23.155 06:51:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:23.155 06:51:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:23.155 06:51:30 -- common/autotest_common.sh@10 -- # set +x 00:11:23.155 06:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:23.155 06:51:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:23.155 06:51:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:23.155 06:51:30 -- common/autotest_common.sh@10 -- # set +x 00:11:23.155 06:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:23.155 06:51:30 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:11:23.721 06:51:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.721 06:51:30 -- common/autotest_common.sh@1177 -- # local i=0 00:11:23.721 06:51:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.721 06:51:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:23.721 06:51:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:25.626 06:51:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:25.626 06:51:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:25.626 06:51:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.626 06:51:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:25.626 06:51:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.626 06:51:32 -- common/autotest_common.sh@1187 -- # return 0 00:11:25.626 06:51:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.885 06:51:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.885 06:51:32 -- common/autotest_common.sh@1198 -- # local i=0 00:11:25.885 06:51:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:25.885 06:51:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.885 06:51:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:25.885 06:51:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.885 06:51:32 -- common/autotest_common.sh@1210 -- # return 0 00:11:25.885 06:51:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:25.885 06:51:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.885 06:51:32 -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 06:51:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:11:25.885 06:51:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.885 06:51:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.885 06:51:32 -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 06:51:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.885 06:51:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:25.885 06:51:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:25.885 06:51:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.885 06:51:32 -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 06:51:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.885 06:51:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.885 06:51:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.885 06:51:32 -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 [2024-05-12 06:51:32.911493] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.885 06:51:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.885 06:51:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:25.885 06:51:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.885 06:51:32 -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 06:51:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.885 06:51:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:25.885 06:51:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.885 06:51:32 -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 06:51:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.885 06:51:32 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.451 06:51:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:26.451 06:51:33 -- common/autotest_common.sh@1177 -- # local i=0 00:11:26.451 06:51:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.451 06:51:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:26.451 06:51:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:28.991 06:51:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:28.991 06:51:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:28.991 06:51:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.991 06:51:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:28.991 06:51:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.991 06:51:35 -- common/autotest_common.sh@1187 -- # return 0 00:11:28.991 06:51:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.991 06:51:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.991 06:51:35 -- common/autotest_common.sh@1198 -- # local i=0 00:11:28.991 06:51:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:28.991 06:51:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.991 06:51:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:28.991 06:51:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.991 06:51:35 -- common/autotest_common.sh@1210 -- # return 0 00:11:28.991 06:51:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.991 06:51:35 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:11:28.991 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.991 06:51:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.991 06:51:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.991 06:51:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.991 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.991 06:51:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.991 06:51:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:28.991 06:51:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.991 06:51:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.991 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.991 06:51:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.991 06:51:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.991 06:51:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.991 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.991 [2024-05-12 06:51:35.629264] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.991 06:51:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.991 06:51:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:28.991 06:51:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.991 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.991 06:51:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.991 06:51:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.991 06:51:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.991 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.991 06:51:35 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.991 06:51:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.292 06:51:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:29.292 06:51:36 -- common/autotest_common.sh@1177 -- # local i=0 00:11:29.292 06:51:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.292 06:51:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:29.292 06:51:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:31.197 06:51:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:31.197 06:51:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:31.197 06:51:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.197 06:51:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:31.197 06:51:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.197 06:51:38 -- common/autotest_common.sh@1187 -- # return 0 00:11:31.197 06:51:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.197 06:51:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.197 06:51:38 -- common/autotest_common.sh@1198 -- # local i=0 00:11:31.197 06:51:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:31.197 06:51:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.197 06:51:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:31.455 06:51:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.455 06:51:38 -- common/autotest_common.sh@1210 -- # return 0 00:11:31.455 06:51:38 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.455 06:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.455 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:31.455 06:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.455 06:51:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.455 06:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.455 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:31.455 06:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.455 06:51:38 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:31.455 06:51:38 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:31.455 06:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.455 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:31.455 06:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.455 06:51:38 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.455 06:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.455 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:31.455 [2024-05-12 06:51:38.366404] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.455 06:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.455 06:51:38 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:31.455 06:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.455 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:31.455 06:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.455 06:51:38 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:31.455 06:51:38 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:11:31.455 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:31.455 06:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.455 06:51:38 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.021 06:51:39 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.021 06:51:39 -- common/autotest_common.sh@1177 -- # local i=0 00:11:32.021 06:51:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.022 06:51:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:32.022 06:51:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:33.928 06:51:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:33.928 06:51:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:33.928 06:51:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.928 06:51:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:33.928 06:51:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.928 06:51:41 -- common/autotest_common.sh@1187 -- # return 0 00:11:33.929 06:51:41 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.188 06:51:41 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:34.188 06:51:41 -- common/autotest_common.sh@1198 -- # local i=0 00:11:34.188 06:51:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:34.188 06:51:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.188 06:51:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:34.188 06:51:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.188 
06:51:41 -- common/autotest_common.sh@1210 -- # return 0 00:11:34.188 06:51:41 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:34.188 06:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:34.188 06:51:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.188 06:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:34.188 06:51:41 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.188 06:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:34.188 06:51:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.188 06:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:34.188 06:51:41 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:34.188 06:51:41 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.188 06:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:34.188 06:51:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.188 06:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:34.188 06:51:41 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.188 06:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:34.188 06:51:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.188 [2024-05-12 06:51:41.166937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.188 06:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:34.188 06:51:41 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:34.188 06:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:34.188 06:51:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.188 06:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:34.188 06:51:41 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.188 06:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:34.188 06:51:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.188 06:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:34.188 06:51:41 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.755 06:51:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.755 06:51:41 -- common/autotest_common.sh@1177 -- # local i=0 00:11:34.755 06:51:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.755 06:51:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:34.755 06:51:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:37.291 06:51:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:37.291 06:51:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:37.291 06:51:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.291 06:51:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:37.291 06:51:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.291 06:51:43 -- common/autotest_common.sh@1187 -- # return 0 00:11:37.291 06:51:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.291 06:51:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.291 06:51:43 -- common/autotest_common.sh@1198 -- # local i=0 00:11:37.291 06:51:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:37.291 06:51:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.291 06:51:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:11:37.291 06:51:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.291 06:51:43 -- common/autotest_common.sh@1210 -- # return 0 00:11:37.291 06:51:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:37.291 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.291 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.291 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.291 06:51:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.291 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.291 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.291 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.291 06:51:43 -- target/rpc.sh@99 -- # seq 1 5 00:11:37.291 06:51:43 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:37.291 06:51:43 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:37.291 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.291 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.291 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.291 06:51:43 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.291 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.291 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.291 [2024-05-12 06:51:43.924817] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.291 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.291 06:51:43 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.291 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.291 06:51:43 -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.291 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.291 06:51:43 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:37.291 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.291 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.291 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.291 06:51:43 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.292 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:43 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.292 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:43 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:37.292 06:51:43 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:37.292 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:43 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.292 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 [2024-05-12 06:51:43.972898] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.292 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 
06:51:43 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.292 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:43 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:37.292 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:43 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.292 06:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:37.292 06:51:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 
[2024-05-12 06:51:44.021034] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:37.292 06:51:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 [2024-05-12 06:51:44.069195] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:37.292 06:51:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 
-- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 [2024-05-12 06:51:44.117367] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.292 06:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:37.292 06:51:44 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.292 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 06:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.292 06:51:44 -- target/rpc.sh@110 -- # stats='{ 00:11:37.292 "tick_rate": 2700000000, 00:11:37.292 "poll_groups": [ 00:11:37.292 { 00:11:37.292 "name": "nvmf_tgt_poll_group_0", 00:11:37.292 "admin_qpairs": 2, 00:11:37.292 "io_qpairs": 84, 00:11:37.292 "current_admin_qpairs": 0, 00:11:37.292 "current_io_qpairs": 0, 00:11:37.292 "pending_bdev_io": 0, 00:11:37.292 "completed_nvme_io": 183, 00:11:37.292 "transports": [ 00:11:37.292 { 00:11:37.292 "trtype": "TCP" 00:11:37.292 } 00:11:37.292 ] 00:11:37.292 }, 00:11:37.292 { 00:11:37.292 "name": "nvmf_tgt_poll_group_1", 00:11:37.292 "admin_qpairs": 2, 00:11:37.292 "io_qpairs": 84, 00:11:37.292 "current_admin_qpairs": 0, 00:11:37.292 "current_io_qpairs": 0, 00:11:37.292 "pending_bdev_io": 0, 00:11:37.292 "completed_nvme_io": 182, 00:11:37.292 "transports": [ 00:11:37.292 { 00:11:37.292 "trtype": "TCP" 00:11:37.292 } 00:11:37.292 ] 00:11:37.292 }, 00:11:37.292 { 00:11:37.292 "name": "nvmf_tgt_poll_group_2", 00:11:37.292 "admin_qpairs": 1, 00:11:37.292 "io_qpairs": 84, 00:11:37.292 "current_admin_qpairs": 0, 00:11:37.292 "current_io_qpairs": 0, 00:11:37.292 "pending_bdev_io": 0, 00:11:37.292 "completed_nvme_io": 185, 00:11:37.292 "transports": [ 00:11:37.292 { 00:11:37.292 "trtype": "TCP" 00:11:37.292 } 00:11:37.292 ] 00:11:37.292 }, 00:11:37.293 { 00:11:37.293 "name": "nvmf_tgt_poll_group_3", 00:11:37.293 "admin_qpairs": 2, 00:11:37.293 "io_qpairs": 84, 00:11:37.293 "current_admin_qpairs": 0, 00:11:37.293 "current_io_qpairs": 0, 00:11:37.293 "pending_bdev_io": 0, 00:11:37.293 "completed_nvme_io": 136, 00:11:37.293 "transports": [ 00:11:37.293 { 00:11:37.293 "trtype": "TCP" 00:11:37.293 } 00:11:37.293 ] 00:11:37.293 } 00:11:37.293 ] 00:11:37.293 }' 00:11:37.293 06:51:44 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 
00:11:37.293 06:51:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:37.293 06:51:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:37.293 06:51:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:37.293 06:51:44 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:37.293 06:51:44 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:37.293 06:51:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:37.293 06:51:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:37.293 06:51:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:37.293 06:51:44 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:37.293 06:51:44 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:37.293 06:51:44 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:37.293 06:51:44 -- target/rpc.sh@123 -- # nvmftestfini 00:11:37.293 06:51:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:37.293 06:51:44 -- nvmf/common.sh@116 -- # sync 00:11:37.293 06:51:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:37.293 06:51:44 -- nvmf/common.sh@119 -- # set +e 00:11:37.293 06:51:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:37.293 06:51:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:37.293 rmmod nvme_tcp 00:11:37.293 rmmod nvme_fabrics 00:11:37.293 rmmod nvme_keyring 00:11:37.293 06:51:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:37.293 06:51:44 -- nvmf/common.sh@123 -- # set -e 00:11:37.293 06:51:44 -- nvmf/common.sh@124 -- # return 0 00:11:37.293 06:51:44 -- nvmf/common.sh@477 -- # '[' -n 2975493 ']' 00:11:37.293 06:51:44 -- nvmf/common.sh@478 -- # killprocess 2975493 00:11:37.293 06:51:44 -- common/autotest_common.sh@926 -- # '[' -z 2975493 ']' 00:11:37.293 06:51:44 -- common/autotest_common.sh@930 -- # kill -0 2975493 00:11:37.293 06:51:44 -- common/autotest_common.sh@931 -- # uname 00:11:37.293 06:51:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:37.293 
06:51:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2975493 00:11:37.293 06:51:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:37.293 06:51:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:37.293 06:51:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2975493' 00:11:37.293 killing process with pid 2975493 00:11:37.293 06:51:44 -- common/autotest_common.sh@945 -- # kill 2975493 00:11:37.293 06:51:44 -- common/autotest_common.sh@950 -- # wait 2975493 00:11:37.552 06:51:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:37.552 06:51:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:37.552 06:51:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:37.552 06:51:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:37.552 06:51:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:37.552 06:51:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.552 06:51:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.552 06:51:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.089 06:51:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:40.089 00:11:40.089 real 0m25.527s 00:11:40.089 user 1m23.344s 00:11:40.089 sys 0m3.874s 00:11:40.089 06:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.089 06:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:40.089 ************************************ 00:11:40.089 END TEST nvmf_rpc 00:11:40.089 ************************************ 00:11:40.089 06:51:46 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:40.089 06:51:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:40.089 06:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:40.089 06:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:40.089 
************************************ 00:11:40.089 START TEST nvmf_invalid 00:11:40.089 ************************************ 00:11:40.090 06:51:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:40.090 * Looking for test storage... 00:11:40.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.090 06:51:46 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.090 06:51:46 -- nvmf/common.sh@7 -- # uname -s 00:11:40.090 06:51:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.090 06:51:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.090 06:51:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.090 06:51:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.090 06:51:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.090 06:51:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.090 06:51:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.090 06:51:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.090 06:51:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.090 06:51:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.090 06:51:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.090 06:51:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.090 06:51:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.090 06:51:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.090 06:51:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.090 06:51:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.090 06:51:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:11:40.090 06:51:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.090 06:51:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.090 06:51:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.090 06:51:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.090 06:51:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.090 06:51:46 -- paths/export.sh@5 -- # export PATH 00:11:40.090 06:51:46 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.090 06:51:46 -- nvmf/common.sh@46 -- # : 0 00:11:40.090 06:51:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:40.090 06:51:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:40.090 06:51:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:40.090 06:51:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.090 06:51:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.090 06:51:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:40.090 06:51:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:40.090 06:51:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:40.090 06:51:46 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:40.090 06:51:46 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:40.090 06:51:46 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:40.090 06:51:46 -- target/invalid.sh@14 -- # target=foobar 00:11:40.090 06:51:46 -- target/invalid.sh@16 -- # RANDOM=0 00:11:40.090 06:51:46 -- target/invalid.sh@34 -- # nvmftestinit 00:11:40.090 06:51:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:40.090 06:51:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.090 06:51:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:40.090 06:51:46 -- nvmf/common.sh@398 -- # local -g 
is_hw=no 00:11:40.090 06:51:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:40.090 06:51:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.090 06:51:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.090 06:51:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.090 06:51:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:40.090 06:51:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:40.090 06:51:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:40.090 06:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:41.992 06:51:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:41.992 06:51:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:41.992 06:51:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:41.992 06:51:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:41.992 06:51:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:41.992 06:51:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:41.992 06:51:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:41.992 06:51:48 -- nvmf/common.sh@294 -- # net_devs=() 00:11:41.992 06:51:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:41.992 06:51:48 -- nvmf/common.sh@295 -- # e810=() 00:11:41.992 06:51:48 -- nvmf/common.sh@295 -- # local -ga e810 00:11:41.992 06:51:48 -- nvmf/common.sh@296 -- # x722=() 00:11:41.992 06:51:48 -- nvmf/common.sh@296 -- # local -ga x722 00:11:41.992 06:51:48 -- nvmf/common.sh@297 -- # mlx=() 00:11:41.992 06:51:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:41.992 06:51:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.992 06:51:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:41.992 06:51:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:41.992 06:51:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:41.992 06:51:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:41.992 06:51:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:41.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:41.992 06:51:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:41.992 06:51:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:41.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:41.992 06:51:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:41.992 06:51:48 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:41.992 06:51:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:41.992 06:51:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:41.992 06:51:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.992 06:51:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:41.992 06:51:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.992 06:51:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:41.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:41.992 06:51:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.992 06:51:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:41.992 06:51:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.992 06:51:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:41.993 06:51:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.993 06:51:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:41.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:41.993 06:51:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.993 06:51:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:41.993 06:51:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:41.993 06:51:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:41.993 06:51:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:41.993 06:51:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:41.993 06:51:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.993 06:51:48 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.993 06:51:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.993 06:51:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:41.993 06:51:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.993 06:51:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.993 06:51:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:41.993 06:51:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.993 06:51:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.993 06:51:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:41.993 06:51:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:41.993 06:51:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.993 06:51:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.993 06:51:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.993 06:51:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.993 06:51:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:41.993 06:51:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.993 06:51:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.993 06:51:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.993 06:51:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:41.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:11:41.993 00:11:41.993 --- 10.0.0.2 ping statistics --- 00:11:41.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.993 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:41.993 06:51:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:11:41.993 00:11:41.993 --- 10.0.0.1 ping statistics --- 00:11:41.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.993 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:41.993 06:51:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.993 06:51:48 -- nvmf/common.sh@410 -- # return 0 00:11:41.993 06:51:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:41.993 06:51:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.993 06:51:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:41.993 06:51:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:41.993 06:51:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.993 06:51:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:41.993 06:51:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:41.993 06:51:48 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:41.993 06:51:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:41.993 06:51:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:41.993 06:51:48 -- common/autotest_common.sh@10 -- # set +x 00:11:41.993 06:51:48 -- nvmf/common.sh@469 -- # nvmfpid=2980145 00:11:41.993 06:51:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.993 06:51:48 -- nvmf/common.sh@470 -- # waitforlisten 2980145 00:11:41.993 06:51:48 -- common/autotest_common.sh@819 
-- # '[' -z 2980145 ']' 00:11:41.993 06:51:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.993 06:51:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:41.993 06:51:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.993 06:51:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:41.993 06:51:48 -- common/autotest_common.sh@10 -- # set +x 00:11:41.993 [2024-05-12 06:51:48.889532] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:41.993 [2024-05-12 06:51:48.889615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.993 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.993 [2024-05-12 06:51:48.953531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.993 [2024-05-12 06:51:49.062354] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:41.993 [2024-05-12 06:51:49.062524] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.993 [2024-05-12 06:51:49.062541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.993 [2024-05-12 06:51:49.062553] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:41.993 [2024-05-12 06:51:49.062614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.993 [2024-05-12 06:51:49.062676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.993 [2024-05-12 06:51:49.062737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.993 [2024-05-12 06:51:49.062741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.927 06:51:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:42.927 06:51:49 -- common/autotest_common.sh@852 -- # return 0 00:11:42.927 06:51:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:42.927 06:51:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:42.927 06:51:49 -- common/autotest_common.sh@10 -- # set +x 00:11:42.927 06:51:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.927 06:51:49 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:42.927 06:51:49 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20482 00:11:43.184 [2024-05-12 06:51:50.158118] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:43.184 06:51:50 -- target/invalid.sh@40 -- # out='request: 00:11:43.184 { 00:11:43.184 "nqn": "nqn.2016-06.io.spdk:cnode20482", 00:11:43.184 "tgt_name": "foobar", 00:11:43.184 "method": "nvmf_create_subsystem", 00:11:43.184 "req_id": 1 00:11:43.184 } 00:11:43.184 Got JSON-RPC error response 00:11:43.184 response: 00:11:43.184 { 00:11:43.184 "code": -32603, 00:11:43.184 "message": "Unable to find target foobar" 00:11:43.184 }' 00:11:43.184 06:51:50 -- target/invalid.sh@41 -- # [[ request: 00:11:43.184 { 00:11:43.184 "nqn": "nqn.2016-06.io.spdk:cnode20482", 00:11:43.184 "tgt_name": "foobar", 00:11:43.184 "method": 
"nvmf_create_subsystem", 00:11:43.184 "req_id": 1 00:11:43.184 } 00:11:43.184 Got JSON-RPC error response 00:11:43.184 response: 00:11:43.184 { 00:11:43.185 "code": -32603, 00:11:43.185 "message": "Unable to find target foobar" 00:11:43.185 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:43.185 06:51:50 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:43.185 06:51:50 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21895 00:11:43.442 [2024-05-12 06:51:50.398910] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21895: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:43.442 06:51:50 -- target/invalid.sh@45 -- # out='request: 00:11:43.442 { 00:11:43.442 "nqn": "nqn.2016-06.io.spdk:cnode21895", 00:11:43.442 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:43.442 "method": "nvmf_create_subsystem", 00:11:43.442 "req_id": 1 00:11:43.442 } 00:11:43.442 Got JSON-RPC error response 00:11:43.442 response: 00:11:43.442 { 00:11:43.442 "code": -32602, 00:11:43.442 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:43.442 }' 00:11:43.442 06:51:50 -- target/invalid.sh@46 -- # [[ request: 00:11:43.442 { 00:11:43.442 "nqn": "nqn.2016-06.io.spdk:cnode21895", 00:11:43.442 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:43.442 "method": "nvmf_create_subsystem", 00:11:43.442 "req_id": 1 00:11:43.442 } 00:11:43.442 Got JSON-RPC error response 00:11:43.442 response: 00:11:43.442 { 00:11:43.442 "code": -32602, 00:11:43.442 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:43.442 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:43.442 06:51:50 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:43.442 06:51:50 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23515 00:11:43.700 [2024-05-12 
06:51:50.639673] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23515: invalid model number 'SPDK_Controller' 00:11:43.700 06:51:50 -- target/invalid.sh@50 -- # out='request: 00:11:43.700 { 00:11:43.700 "nqn": "nqn.2016-06.io.spdk:cnode23515", 00:11:43.700 "model_number": "SPDK_Controller\u001f", 00:11:43.700 "method": "nvmf_create_subsystem", 00:11:43.700 "req_id": 1 00:11:43.700 } 00:11:43.700 Got JSON-RPC error response 00:11:43.700 response: 00:11:43.700 { 00:11:43.700 "code": -32602, 00:11:43.700 "message": "Invalid MN SPDK_Controller\u001f" 00:11:43.700 }' 00:11:43.700 06:51:50 -- target/invalid.sh@51 -- # [[ request: 00:11:43.700 { 00:11:43.700 "nqn": "nqn.2016-06.io.spdk:cnode23515", 00:11:43.700 "model_number": "SPDK_Controller\u001f", 00:11:43.700 "method": "nvmf_create_subsystem", 00:11:43.700 "req_id": 1 00:11:43.700 } 00:11:43.700 Got JSON-RPC error response 00:11:43.700 response: 00:11:43.700 { 00:11:43.700 "code": -32602, 00:11:43.700 "message": "Invalid MN SPDK_Controller\u001f" 00:11:43.700 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:43.700 06:51:50 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:43.700 06:51:50 -- target/invalid.sh@19 -- # local length=21 ll 00:11:43.700 06:51:50 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:43.700 06:51:50 -- target/invalid.sh@21 -- # local chars 00:11:43.700 06:51:50 -- target/invalid.sh@22 -- # local string 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll = 0 )) 
00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # printf %x 102 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # string+=f 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # printf %x 50 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # string+=2 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # printf %x 76 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # string+=L 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # printf %x 93 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # string+=']' 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # printf %x 111 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:43.700 06:51:50 -- target/invalid.sh@25 -- # string+=o 00:11:43.700 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 51 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=3 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 78 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=N 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 108 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=l 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 46 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=. 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 92 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+='\' 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 123 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+='{' 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 84 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=T 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # 
printf %x 125 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+='}' 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 100 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=d 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 91 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+='[' 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 66 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=B 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 76 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=L 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 46 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=. 
00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 53 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=5 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 45 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=- 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # printf %x 116 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:43.701 06:51:50 -- target/invalid.sh@25 -- # string+=t 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.701 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.701 06:51:50 -- target/invalid.sh@28 -- # [[ f == \- ]] 00:11:43.701 06:51:50 -- target/invalid.sh@31 -- # echo 'f2L]o3Nl.\{T}d[BL.5-t' 00:11:43.701 06:51:50 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'f2L]o3Nl.\{T}d[BL.5-t' nqn.2016-06.io.spdk:cnode21806 00:11:43.960 [2024-05-12 06:51:50.956702] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21806: invalid serial number 'f2L]o3Nl.\{T}d[BL.5-t' 00:11:43.960 06:51:50 -- target/invalid.sh@54 -- # out='request: 00:11:43.960 { 00:11:43.960 "nqn": "nqn.2016-06.io.spdk:cnode21806", 00:11:43.960 "serial_number": "f2L]o3Nl.\\{T}d[BL.5-t", 00:11:43.960 "method": "nvmf_create_subsystem", 00:11:43.960 "req_id": 1 00:11:43.960 } 00:11:43.960 Got JSON-RPC error response 00:11:43.960 response: 00:11:43.960 { 00:11:43.960 "code": 
-32602, 00:11:43.960 "message": "Invalid SN f2L]o3Nl.\\{T}d[BL.5-t" 00:11:43.960 }' 00:11:43.960 06:51:50 -- target/invalid.sh@55 -- # [[ request: 00:11:43.960 { 00:11:43.960 "nqn": "nqn.2016-06.io.spdk:cnode21806", 00:11:43.960 "serial_number": "f2L]o3Nl.\\{T}d[BL.5-t", 00:11:43.960 "method": "nvmf_create_subsystem", 00:11:43.960 "req_id": 1 00:11:43.960 } 00:11:43.960 Got JSON-RPC error response 00:11:43.960 response: 00:11:43.960 { 00:11:43.960 "code": -32602, 00:11:43.960 "message": "Invalid SN f2L]o3Nl.\\{T}d[BL.5-t" 00:11:43.960 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:43.960 06:51:50 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:43.960 06:51:50 -- target/invalid.sh@19 -- # local length=41 ll 00:11:43.960 06:51:50 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:43.960 06:51:50 -- target/invalid.sh@21 -- # local chars 00:11:43.960 06:51:50 -- target/invalid.sh@22 -- # local string 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # printf %x 95 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # string+=_ 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # printf %x 54 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x36' 
00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # string+=6 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # printf %x 58 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # string+=: 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # printf %x 51 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # string+=3 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # printf %x 82 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # string+=R 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:50 -- target/invalid.sh@25 -- # printf %x 75 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=K 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 98 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=b 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 53 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=5 
00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 102 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=f 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 50 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=2 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 46 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=. 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 115 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=s 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 111 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=o 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 79 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=O 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 
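The trace above is a character-at-a-time loop: pick an ASCII code from the `chars` array (codes 32–127), render it as hex with `printf %x`, expand it with `echo -e '\xNN'`, and append it to `string`. A minimal Python sketch of the same idea — `gen_random_s` is the script's own helper name, but this reimplementation is an assumption, not the script's actual code:

```python
import random

def gen_random_s(length):
    # chars array from the trace: ASCII codes 32..127
    chars = list(range(32, 128))
    string = ""
    for _ in range(length):                # the (( ll++ )) / (( ll < length )) loop
        code = random.choice(chars)
        hex_repr = format(code, "x")       # printf %x
        string += chr(int(hex_repr, 16))   # echo -e '\xNN' equivalent
    return string

s = gen_random_s(41)
print(len(s))  # 41
```

The hex round-trip is redundant in Python (`chr(code)` would do), but it mirrors the shell's `printf %x` → `echo -e` pipeline step for step.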
00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 116 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=t 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 104 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=h 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 70 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=F 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 48 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=0 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 126 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+='~' 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 49 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=1 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 63 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+='?' 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 62 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+='>' 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 83 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=S 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 96 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+='`' 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 56 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=8 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # printf %x 115 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=s 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.960 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # 
printf %x 84 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:43.960 06:51:51 -- target/invalid.sh@25 -- # string+=T 00:11:43.961 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.961 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # printf %x 55 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # string+=7 00:11:43.961 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.961 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # printf %x 108 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # string+=l 00:11:43.961 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.961 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # printf %x 97 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # string+=a 00:11:43.961 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.961 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # printf %x 119 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:43.961 06:51:51 -- target/invalid.sh@25 -- # string+=w 00:11:44.218 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.218 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.218 06:51:51 -- target/invalid.sh@25 -- # printf %x 78 00:11:44.218 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:44.218 06:51:51 -- target/invalid.sh@25 -- # string+=N 00:11:44.218 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.218 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.218 06:51:51 -- target/invalid.sh@25 -- # printf %x 91 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # 
echo -e '\x5b' 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # string+='[' 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # printf %x 81 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # string+=Q 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # printf %x 85 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # string+=U 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # printf %x 77 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # string+=M 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # printf %x 71 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # string+=G 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # printf %x 120 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # string+=x 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # printf %x 46 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- 
# string+=. 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # printf %x 51 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # string+=3 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # printf %x 103 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:44.219 06:51:51 -- target/invalid.sh@25 -- # string+=g 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:44.219 06:51:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:44.219 06:51:51 -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:11:44.219 06:51:51 -- target/invalid.sh@31 -- # echo '_6:3RKb5f2.soOthF0~1?>S`8sT7lawN[QUMGx.3g' 00:11:44.219 06:51:51 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '_6:3RKb5f2.soOthF0~1?>S`8sT7lawN[QUMGx.3g' nqn.2016-06.io.spdk:cnode13148 00:11:44.477 [2024-05-12 06:51:51.386128] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13148: invalid model number '_6:3RKb5f2.soOthF0~1?>S`8sT7lawN[QUMGx.3g' 00:11:44.477 06:51:51 -- target/invalid.sh@58 -- # out='request: 00:11:44.477 { 00:11:44.477 "nqn": "nqn.2016-06.io.spdk:cnode13148", 00:11:44.477 "model_number": "_6:3RKb5f2.soOthF0~1?>S`8sT7lawN[QUMGx.3g", 00:11:44.477 "method": "nvmf_create_subsystem", 00:11:44.477 "req_id": 1 00:11:44.477 } 00:11:44.477 Got JSON-RPC error response 00:11:44.477 response: 00:11:44.477 { 00:11:44.477 "code": -32602, 00:11:44.477 "message": "Invalid MN _6:3RKb5f2.soOthF0~1?>S`8sT7lawN[QUMGx.3g" 00:11:44.477 }' 00:11:44.477 06:51:51 -- target/invalid.sh@59 -- # [[ request: 00:11:44.477 { 00:11:44.477 "nqn": 
"nqn.2016-06.io.spdk:cnode13148", 00:11:44.477 "model_number": "_6:3RKb5f2.soOthF0~1?>S`8sT7lawN[QUMGx.3g", 00:11:44.477 "method": "nvmf_create_subsystem", 00:11:44.477 "req_id": 1 00:11:44.477 } 00:11:44.477 Got JSON-RPC error response 00:11:44.477 response: 00:11:44.477 { 00:11:44.477 "code": -32602, 00:11:44.477 "message": "Invalid MN _6:3RKb5f2.soOthF0~1?>S`8sT7lawN[QUMGx.3g" 00:11:44.477 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:44.477 06:51:51 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:44.735 [2024-05-12 06:51:51.618962] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.735 06:51:51 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:44.992 06:51:51 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:44.992 06:51:51 -- target/invalid.sh@67 -- # echo '' 00:11:44.992 06:51:51 -- target/invalid.sh@67 -- # head -n 1 00:11:44.993 06:51:51 -- target/invalid.sh@67 -- # IP= 00:11:44.993 06:51:51 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:44.993 [2024-05-12 06:51:52.104651] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:45.251 06:51:52 -- target/invalid.sh@69 -- # out='request: 00:11:45.251 { 00:11:45.251 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:45.251 "listen_address": { 00:11:45.251 "trtype": "tcp", 00:11:45.251 "traddr": "", 00:11:45.251 "trsvcid": "4421" 00:11:45.251 }, 00:11:45.251 "method": "nvmf_subsystem_remove_listener", 00:11:45.251 "req_id": 1 00:11:45.251 } 00:11:45.251 Got JSON-RPC error response 00:11:45.251 response: 00:11:45.251 { 00:11:45.251 "code": -32602, 00:11:45.251 "message": "Invalid parameters" 00:11:45.251 }' 00:11:45.251 06:51:52 -- 
target/invalid.sh@70 -- # [[ request: 00:11:45.251 { 00:11:45.251 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:45.251 "listen_address": { 00:11:45.251 "trtype": "tcp", 00:11:45.251 "traddr": "", 00:11:45.251 "trsvcid": "4421" 00:11:45.251 }, 00:11:45.251 "method": "nvmf_subsystem_remove_listener", 00:11:45.251 "req_id": 1 00:11:45.251 } 00:11:45.251 Got JSON-RPC error response 00:11:45.251 response: 00:11:45.251 { 00:11:45.251 "code": -32602, 00:11:45.251 "message": "Invalid parameters" 00:11:45.251 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:45.251 06:51:52 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29209 -i 0 00:11:45.251 [2024-05-12 06:51:52.357445] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29209: invalid cntlid range [0-65519] 00:11:45.251 06:51:52 -- target/invalid.sh@73 -- # out='request: 00:11:45.251 { 00:11:45.251 "nqn": "nqn.2016-06.io.spdk:cnode29209", 00:11:45.251 "min_cntlid": 0, 00:11:45.251 "method": "nvmf_create_subsystem", 00:11:45.251 "req_id": 1 00:11:45.251 } 00:11:45.251 Got JSON-RPC error response 00:11:45.251 response: 00:11:45.251 { 00:11:45.251 "code": -32602, 00:11:45.251 "message": "Invalid cntlid range [0-65519]" 00:11:45.251 }' 00:11:45.251 06:51:52 -- target/invalid.sh@74 -- # [[ request: 00:11:45.251 { 00:11:45.251 "nqn": "nqn.2016-06.io.spdk:cnode29209", 00:11:45.251 "min_cntlid": 0, 00:11:45.251 "method": "nvmf_create_subsystem", 00:11:45.251 "req_id": 1 00:11:45.251 } 00:11:45.251 Got JSON-RPC error response 00:11:45.251 response: 00:11:45.251 { 00:11:45.251 "code": -32602, 00:11:45.251 "message": "Invalid cntlid range [0-65519]" 00:11:45.251 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:45.251 06:51:52 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30277 -i 65520 
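Each cntlid test drives `nvmf_create_subsystem` with an out-of-range value and matches the resulting `-32602` response. The bounds below are inferred from the error strings in the trace (`[0-65519]`, `[65520-65519]`, `[1-0]`, `[6-5]`); this validator is a sketch of the apparent rule, not SPDK's actual implementation:

```python
def validate_cntlid_range(min_cntlid=1, max_cntlid=65519):
    # Controller IDs must stay within 1..65519 and min must not exceed max,
    # matching the "Invalid cntlid range [min-max]" errors in the trace.
    ok = (1 <= min_cntlid <= 65519
          and 1 <= max_cntlid <= 65519
          and min_cntlid <= max_cntlid)
    if not ok:
        return {"code": -32602,
                "message": f"Invalid cntlid range [{min_cntlid}-{max_cntlid}]"}
    return None  # request would be accepted

print(validate_cntlid_range(min_cntlid=0)["message"])            # Invalid cntlid range [0-65519]
print(validate_cntlid_range(min_cntlid=6, max_cntlid=5)["message"])  # Invalid cntlid range [6-5]
```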
00:11:45.508 [2024-05-12 06:51:52.594263] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30277: invalid cntlid range [65520-65519] 00:11:45.508 06:51:52 -- target/invalid.sh@75 -- # out='request: 00:11:45.508 { 00:11:45.508 "nqn": "nqn.2016-06.io.spdk:cnode30277", 00:11:45.508 "min_cntlid": 65520, 00:11:45.508 "method": "nvmf_create_subsystem", 00:11:45.508 "req_id": 1 00:11:45.508 } 00:11:45.508 Got JSON-RPC error response 00:11:45.508 response: 00:11:45.508 { 00:11:45.508 "code": -32602, 00:11:45.508 "message": "Invalid cntlid range [65520-65519]" 00:11:45.508 }' 00:11:45.508 06:51:52 -- target/invalid.sh@76 -- # [[ request: 00:11:45.508 { 00:11:45.508 "nqn": "nqn.2016-06.io.spdk:cnode30277", 00:11:45.508 "min_cntlid": 65520, 00:11:45.508 "method": "nvmf_create_subsystem", 00:11:45.508 "req_id": 1 00:11:45.508 } 00:11:45.508 Got JSON-RPC error response 00:11:45.508 response: 00:11:45.508 { 00:11:45.508 "code": -32602, 00:11:45.508 "message": "Invalid cntlid range [65520-65519]" 00:11:45.508 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:45.508 06:51:52 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode79 -I 0 00:11:45.804 [2024-05-12 06:51:52.835108] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode79: invalid cntlid range [1-0] 00:11:45.804 06:51:52 -- target/invalid.sh@77 -- # out='request: 00:11:45.804 { 00:11:45.804 "nqn": "nqn.2016-06.io.spdk:cnode79", 00:11:45.804 "max_cntlid": 0, 00:11:45.804 "method": "nvmf_create_subsystem", 00:11:45.804 "req_id": 1 00:11:45.804 } 00:11:45.804 Got JSON-RPC error response 00:11:45.804 response: 00:11:45.804 { 00:11:45.804 "code": -32602, 00:11:45.804 "message": "Invalid cntlid range [1-0]" 00:11:45.804 }' 00:11:45.804 06:51:52 -- target/invalid.sh@78 -- # [[ request: 00:11:45.804 { 00:11:45.804 "nqn": "nqn.2016-06.io.spdk:cnode79", 00:11:45.804 
"max_cntlid": 0, 00:11:45.804 "method": "nvmf_create_subsystem", 00:11:45.804 "req_id": 1 00:11:45.804 } 00:11:45.804 Got JSON-RPC error response 00:11:45.804 response: 00:11:45.804 { 00:11:45.804 "code": -32602, 00:11:45.804 "message": "Invalid cntlid range [1-0]" 00:11:45.804 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:45.804 06:51:52 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19037 -I 65520 00:11:46.082 [2024-05-12 06:51:53.063876] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19037: invalid cntlid range [1-65520] 00:11:46.082 06:51:53 -- target/invalid.sh@79 -- # out='request: 00:11:46.082 { 00:11:46.082 "nqn": "nqn.2016-06.io.spdk:cnode19037", 00:11:46.082 "max_cntlid": 65520, 00:11:46.082 "method": "nvmf_create_subsystem", 00:11:46.082 "req_id": 1 00:11:46.082 } 00:11:46.082 Got JSON-RPC error response 00:11:46.082 response: 00:11:46.082 { 00:11:46.082 "code": -32602, 00:11:46.082 "message": "Invalid cntlid range [1-65520]" 00:11:46.082 }' 00:11:46.082 06:51:53 -- target/invalid.sh@80 -- # [[ request: 00:11:46.082 { 00:11:46.082 "nqn": "nqn.2016-06.io.spdk:cnode19037", 00:11:46.082 "max_cntlid": 65520, 00:11:46.082 "method": "nvmf_create_subsystem", 00:11:46.082 "req_id": 1 00:11:46.082 } 00:11:46.082 Got JSON-RPC error response 00:11:46.082 response: 00:11:46.082 { 00:11:46.082 "code": -32602, 00:11:46.082 "message": "Invalid cntlid range [1-65520]" 00:11:46.082 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:46.082 06:51:53 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14337 -i 6 -I 5 00:11:46.340 [2024-05-12 06:51:53.308692] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14337: invalid cntlid range [6-5] 00:11:46.340 06:51:53 -- target/invalid.sh@83 -- # 
out='request: 00:11:46.340 { 00:11:46.340 "nqn": "nqn.2016-06.io.spdk:cnode14337", 00:11:46.340 "min_cntlid": 6, 00:11:46.340 "max_cntlid": 5, 00:11:46.340 "method": "nvmf_create_subsystem", 00:11:46.340 "req_id": 1 00:11:46.340 } 00:11:46.340 Got JSON-RPC error response 00:11:46.340 response: 00:11:46.340 { 00:11:46.340 "code": -32602, 00:11:46.340 "message": "Invalid cntlid range [6-5]" 00:11:46.340 }' 00:11:46.340 06:51:53 -- target/invalid.sh@84 -- # [[ request: 00:11:46.340 { 00:11:46.340 "nqn": "nqn.2016-06.io.spdk:cnode14337", 00:11:46.340 "min_cntlid": 6, 00:11:46.340 "max_cntlid": 5, 00:11:46.340 "method": "nvmf_create_subsystem", 00:11:46.340 "req_id": 1 00:11:46.340 } 00:11:46.340 Got JSON-RPC error response 00:11:46.340 response: 00:11:46.340 { 00:11:46.340 "code": -32602, 00:11:46.340 "message": "Invalid cntlid range [6-5]" 00:11:46.340 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:46.340 06:51:53 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:46.340 06:51:53 -- target/invalid.sh@87 -- # out='request: 00:11:46.340 { 00:11:46.340 "name": "foobar", 00:11:46.340 "method": "nvmf_delete_target", 00:11:46.340 "req_id": 1 00:11:46.340 } 00:11:46.340 Got JSON-RPC error response 00:11:46.340 response: 00:11:46.340 { 00:11:46.340 "code": -32602, 00:11:46.340 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:46.340 }' 00:11:46.340 06:51:53 -- target/invalid.sh@88 -- # [[ request: 00:11:46.340 { 00:11:46.340 "name": "foobar", 00:11:46.340 "method": "nvmf_delete_target", 00:11:46.340 "req_id": 1 00:11:46.340 } 00:11:46.340 Got JSON-RPC error response 00:11:46.340 response: 00:11:46.340 { 00:11:46.340 "code": -32602, 00:11:46.340 "message": "The specified target doesn't exist, cannot delete it." 
00:11:46.340 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:46.340 06:51:53 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:46.340 06:51:53 -- target/invalid.sh@91 -- # nvmftestfini 00:11:46.340 06:51:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:46.340 06:51:53 -- nvmf/common.sh@116 -- # sync 00:11:46.340 06:51:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:46.340 06:51:53 -- nvmf/common.sh@119 -- # set +e 00:11:46.340 06:51:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:46.340 06:51:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:46.340 rmmod nvme_tcp 00:11:46.340 rmmod nvme_fabrics 00:11:46.598 rmmod nvme_keyring 00:11:46.598 06:51:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:46.598 06:51:53 -- nvmf/common.sh@123 -- # set -e 00:11:46.598 06:51:53 -- nvmf/common.sh@124 -- # return 0 00:11:46.598 06:51:53 -- nvmf/common.sh@477 -- # '[' -n 2980145 ']' 00:11:46.598 06:51:53 -- nvmf/common.sh@478 -- # killprocess 2980145 00:11:46.598 06:51:53 -- common/autotest_common.sh@926 -- # '[' -z 2980145 ']' 00:11:46.598 06:51:53 -- common/autotest_common.sh@930 -- # kill -0 2980145 00:11:46.598 06:51:53 -- common/autotest_common.sh@931 -- # uname 00:11:46.598 06:51:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:46.598 06:51:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2980145 00:11:46.598 06:51:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:46.598 06:51:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:46.598 06:51:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2980145' 00:11:46.598 killing process with pid 2980145 00:11:46.598 06:51:53 -- common/autotest_common.sh@945 -- # kill 2980145 00:11:46.598 06:51:53 -- common/autotest_common.sh@950 -- # wait 2980145 00:11:46.857 06:51:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
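Every negative test in nvmf_invalid ends the same way: the JSON-RPC response is captured in `$out` and glob-matched, e.g. `[[ $out == *\I\n\v\a\l\i\d\ \S\N* ]]` (each character backslash-escaped so the space is literal). A hedged Python equivalent of that check — `expect_rpc_error` is a made-up helper name, and the sample response text is taken from the trace:

```python
from fnmatch import fnmatch

def expect_rpc_error(out, needle):
    # bash [[ $out == *needle* ]] is a glob match; with no glob
    # metacharacters in needle it reduces to a substring test.
    return fnmatch(out, "*" + needle + "*")

out = '{"code": -32602, "message": "Invalid cntlid range [6-5]"}'
print(expect_rpc_error(out, "Invalid cntlid range"))  # True
```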
00:11:46.857 06:51:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:46.857 06:51:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:46.857 06:51:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.857 06:51:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:46.857 06:51:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.857 06:51:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.857 06:51:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.761 06:51:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:48.761 00:11:48.761 real 0m9.144s 00:11:48.761 user 0m22.541s 00:11:48.761 sys 0m2.323s 00:11:48.761 06:51:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.761 06:51:55 -- common/autotest_common.sh@10 -- # set +x 00:11:48.761 ************************************ 00:11:48.761 END TEST nvmf_invalid 00:11:48.761 ************************************ 00:11:48.761 06:51:55 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:48.761 06:51:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:48.761 06:51:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.761 06:51:55 -- common/autotest_common.sh@10 -- # set +x 00:11:48.761 ************************************ 00:11:48.761 START TEST nvmf_abort 00:11:48.761 ************************************ 00:11:48.761 06:51:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:49.019 * Looking for test storage... 
00:11:49.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.019 06:51:55 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.019 06:51:55 -- nvmf/common.sh@7 -- # uname -s 00:11:49.019 06:51:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.019 06:51:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.019 06:51:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.019 06:51:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.019 06:51:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.019 06:51:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.019 06:51:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.019 06:51:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.019 06:51:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.019 06:51:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.019 06:51:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.019 06:51:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.019 06:51:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.019 06:51:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.019 06:51:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.019 06:51:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.019 06:51:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.019 06:51:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.019 06:51:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.019 06:51:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.019 06:51:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.019 06:51:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.019 06:51:55 -- paths/export.sh@5 -- # export PATH 00:11:49.019 06:51:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.019 06:51:55 -- nvmf/common.sh@46 -- # : 0 00:11:49.019 06:51:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:49.019 06:51:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:49.019 06:51:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:49.019 06:51:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.020 06:51:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.020 06:51:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:49.020 06:51:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:49.020 06:51:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:49.020 06:51:55 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.020 06:51:55 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:49.020 06:51:55 -- target/abort.sh@14 -- # nvmftestinit 00:11:49.020 06:51:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:49.020 06:51:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.020 06:51:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:49.020 06:51:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:49.020 06:51:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:49.020 06:51:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.020 06:51:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.020 06:51:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.020 06:51:55 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:49.020 06:51:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:49.020 06:51:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:49.020 06:51:55 -- common/autotest_common.sh@10 -- # set +x 00:11:50.922 06:51:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:50.922 06:51:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:50.922 06:51:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:50.922 06:51:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:50.922 06:51:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:50.922 06:51:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:50.922 06:51:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:50.922 06:51:57 -- nvmf/common.sh@294 -- # net_devs=() 00:11:50.922 06:51:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:50.922 06:51:57 -- nvmf/common.sh@295 -- # e810=() 00:11:50.922 06:51:57 -- nvmf/common.sh@295 -- # local -ga e810 00:11:50.922 06:51:57 -- nvmf/common.sh@296 -- # x722=() 00:11:50.922 06:51:57 -- nvmf/common.sh@296 -- # local -ga x722 00:11:50.922 06:51:57 -- nvmf/common.sh@297 -- # mlx=() 00:11:50.922 06:51:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:50.922 06:51:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.922 06:51:57 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.922 06:51:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:50.922 06:51:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:50.922 06:51:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:50.922 06:51:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:50.922 06:51:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:50.922 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:50.922 06:51:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:50.922 06:51:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:50.922 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:50.922 06:51:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:50.922 06:51:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:50.922 06:51:57 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:50.922 06:51:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:50.922 06:51:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.922 06:51:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:50.922 06:51:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.922 06:51:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:50.922 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:50.922 06:51:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.922 06:51:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:50.922 06:51:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.922 06:51:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:50.922 06:51:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.922 06:51:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:50.922 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:50.923 06:51:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.923 06:51:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:50.923 06:51:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:50.923 06:51:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:50.923 06:51:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:50.923 06:51:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:50.923 06:51:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.923 06:51:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.923 06:51:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.923 06:51:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:50.923 06:51:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.923 06:51:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.923 06:51:57 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:50.923 06:51:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.923 06:51:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.923 06:51:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:50.923 06:51:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:50.923 06:51:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.923 06:51:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.923 06:51:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.923 06:51:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.923 06:51:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:50.923 06:51:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.923 06:51:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.923 06:51:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.923 06:51:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:50.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:11:50.923 00:11:50.923 --- 10.0.0.2 ping statistics --- 00:11:50.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.923 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:50.923 06:51:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:11:51.182 00:11:51.182 --- 10.0.0.1 ping statistics --- 00:11:51.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.182 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:11:51.182 06:51:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.182 06:51:58 -- nvmf/common.sh@410 -- # return 0 00:11:51.182 06:51:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:51.182 06:51:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.182 06:51:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:51.182 06:51:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:51.182 06:51:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.182 06:51:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:51.182 06:51:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:51.182 06:51:58 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:51.182 06:51:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:51.182 06:51:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:51.182 06:51:58 -- common/autotest_common.sh@10 -- # set +x 00:11:51.182 06:51:58 -- nvmf/common.sh@469 -- # nvmfpid=2982873 00:11:51.182 06:51:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:51.182 06:51:58 -- nvmf/common.sh@470 -- # waitforlisten 2982873 00:11:51.182 06:51:58 -- common/autotest_common.sh@819 -- # '[' -z 2982873 ']' 00:11:51.182 06:51:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.182 06:51:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:51.182 06:51:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
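The namespace setup is validated above by two `ping -c 1` runs reporting 0% packet loss. Scripts that gate on such a check typically parse ping's summary line; a sketch of that extraction, run here against captured text mirroring the log output rather than a live interface:

```shell
# Extract the integer packet-loss percentage from ping's summary line.
parse_ping_loss() {
    # reads ping output on stdin, prints e.g. "0"
    sed -n 's/.* \([0-9][0-9]*\)% packet loss.*/\1/p'
}

sample='1 packets transmitted, 1 received, 0% packet loss, time 0ms'
loss=$(printf '%s\n' "$sample" | parse_ping_loss)
echo "loss=${loss}%"
```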
00:11:51.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.182 06:51:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:51.182 06:51:58 -- common/autotest_common.sh@10 -- # set +x 00:11:51.182 [2024-05-12 06:51:58.111597] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:51.182 [2024-05-12 06:51:58.111682] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.182 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.182 [2024-05-12 06:51:58.178688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:51.182 [2024-05-12 06:51:58.291644] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:51.182 [2024-05-12 06:51:58.291799] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.182 [2024-05-12 06:51:58.291817] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.182 [2024-05-12 06:51:58.291830] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
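`nvmf_tgt` is started above with `-m 0xE`, and the reactor log then shows workers on cores 1, 2 and 3: the mask is a bitmap of allowed cores (0xE = 0b1110, so bit 0 is clear and bits 1–3 are set). A small helper sketching how such a mask expands into a core list:

```shell
# Expand a hex core mask (as passed to nvmf_tgt via -m) into the cores it selects.
mask_to_cores() {
    local mask=$(( $1 )) bit=0 cores=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            cores="${cores:+$cores }$bit"   # append this core number
        fi
        bit=$(( bit + 1 ))
        mask=$(( mask >> 1 ))
    done
    echo "$cores"
}

mask_to_cores 0xE    # -> 1 2 3
```

The trace's `0xFFFF` tracepoint mask passed via `-e` works the same way, except each bit selects a trace group rather than a CPU core.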
00:11:51.182 [2024-05-12 06:51:58.291902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.182 [2024-05-12 06:51:58.291963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.182 [2024-05-12 06:51:58.291966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.115 06:51:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:52.115 06:51:59 -- common/autotest_common.sh@852 -- # return 0 00:11:52.115 06:51:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:52.115 06:51:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:52.115 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:11:52.115 06:51:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.115 06:51:59 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:52.115 06:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:52.115 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:11:52.115 [2024-05-12 06:51:59.149891] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.115 06:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:52.115 06:51:59 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:52.115 06:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:52.115 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:11:52.115 Malloc0 00:11:52.115 06:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:52.116 06:51:59 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:52.116 06:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:52.116 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:11:52.116 Delay0 00:11:52.116 06:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:52.116 06:51:59 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:52.116 06:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:52.116 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:11:52.116 06:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:52.116 06:51:59 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:52.116 06:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:52.116 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:11:52.116 06:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:52.116 06:51:59 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:52.116 06:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:52.116 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:11:52.116 [2024-05-12 06:51:59.221552] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.116 06:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:52.116 06:51:59 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.116 06:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:52.116 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:11:52.116 06:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:52.116 06:51:59 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:52.373 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.373 [2024-05-12 06:51:59.287121] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:54.272 Initializing NVMe Controllers 00:11:54.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:11:54.272 controller IO queue size 128 less than required 00:11:54.272 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:54.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:54.272 Initialization complete. Launching workers. 00:11:54.272 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33024 00:11:54.272 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33089, failed to submit 62 00:11:54.272 success 33024, unsuccess 65, failed 0 00:11:54.272 06:52:01 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:54.272 06:52:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.272 06:52:01 -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 06:52:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.272 06:52:01 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:54.272 06:52:01 -- target/abort.sh@38 -- # nvmftestfini 00:11:54.272 06:52:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:54.272 06:52:01 -- nvmf/common.sh@116 -- # sync 00:11:54.272 06:52:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:54.272 06:52:01 -- nvmf/common.sh@119 -- # set +e 00:11:54.272 06:52:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:54.272 06:52:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:54.272 rmmod nvme_tcp 00:11:54.530 rmmod nvme_fabrics 00:11:54.530 rmmod nvme_keyring 00:11:54.530 06:52:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:54.530 06:52:01 -- nvmf/common.sh@123 -- # set -e 00:11:54.530 06:52:01 -- nvmf/common.sh@124 -- # return 0 00:11:54.530 06:52:01 -- nvmf/common.sh@477 -- # '[' -n 2982873 ']' 00:11:54.530 06:52:01 -- nvmf/common.sh@478 -- # killprocess 2982873 00:11:54.530 06:52:01 -- common/autotest_common.sh@926 -- # '[' -z 2982873 ']' 00:11:54.530 06:52:01 
-- common/autotest_common.sh@930 -- # kill -0 2982873 00:11:54.530 06:52:01 -- common/autotest_common.sh@931 -- # uname 00:11:54.530 06:52:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:54.530 06:52:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2982873 00:11:54.530 06:52:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:54.530 06:52:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:54.530 06:52:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2982873' 00:11:54.530 killing process with pid 2982873 00:11:54.530 06:52:01 -- common/autotest_common.sh@945 -- # kill 2982873 00:11:54.530 06:52:01 -- common/autotest_common.sh@950 -- # wait 2982873 00:11:54.799 06:52:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:54.799 06:52:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:54.799 06:52:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:54.799 06:52:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.799 06:52:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:54.799 06:52:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.799 06:52:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.799 06:52:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.704 06:52:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:56.704 00:11:56.704 real 0m7.959s 00:11:56.704 user 0m12.783s 00:11:56.704 sys 0m2.533s 00:11:56.704 06:52:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.704 06:52:03 -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 ************************************ 00:11:56.704 END TEST nvmf_abort 00:11:56.704 ************************************ 00:11:56.962 06:52:03 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
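The abort-example summary earlier in this test is internally consistent: of the 33,089 aborts the initiator submitted, 33,024 succeeded and 65 did not, and the 33,024 successes match the 33,024 I/Os reported as failed (i.e. aborted). A quick arithmetic check of those counters:

```shell
# Counters copied from the abort summary above.
submitted=33089
success=33024
unsuccess=65
failed_io=33024

# Every submitted abort is either a success or an "unsuccess" ...
[ $(( success + unsuccess )) -eq "$submitted" ] && echo "abort counters balance"
# ... and each successful abort accounts for one failed (aborted) I/O.
[ "$success" -eq "$failed_io" ] && echo "aborts match failed I/Os"
```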
00:11:56.962 06:52:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:56.962 06:52:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:56.962 06:52:03 -- common/autotest_common.sh@10 -- # set +x 00:11:56.962 ************************************ 00:11:56.962 START TEST nvmf_ns_hotplug_stress 00:11:56.962 ************************************ 00:11:56.962 06:52:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:56.962 * Looking for test storage... 00:11:56.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.962 06:52:03 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.962 06:52:03 -- nvmf/common.sh@7 -- # uname -s 00:11:56.962 06:52:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.962 06:52:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.962 06:52:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.962 06:52:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.962 06:52:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.962 06:52:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.962 06:52:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.962 06:52:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.962 06:52:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.962 06:52:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.962 06:52:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.962 06:52:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.962 06:52:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.962 06:52:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
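The `paths/export.sh` steps sourced earlier prepend the same Go/protoc/golangci directories on every invocation, which is why the exported PATH in this log repeats each toolchain entry many times. A sketch of an idempotent prepend that would avoid the duplication (the helper name is illustrative, not part of the autotest scripts):

```shell
# Prepend a directory to a PATH-style string only if it is not already present.
path_prepend() {
    # $1 = directory to add, $2 = current path; prints the resulting path
    case ":$2:" in
        *":$1:"*) echo "$2" ;;       # already present: leave the path untouched
        *)        echo "$1:$2" ;;
    esac
}

p=/usr/bin:/bin
p=$(path_prepend /opt/go/1.21.1/bin "$p")
p=$(path_prepend /opt/go/1.21.1/bin "$p")   # second call is a no-op
echo "$p"    # -> /opt/go/1.21.1/bin:/usr/bin:/bin
```

Wrapping the candidate path in colons before the `case` match avoids false positives on substrings (e.g. `/bin` inside `/usr/bin`).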
00:11:56.962 06:52:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.962 06:52:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.962 06:52:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.962 06:52:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.962 06:52:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.962 06:52:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.962 06:52:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.962 06:52:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.962 06:52:03 -- paths/export.sh@5 -- # export PATH 00:11:56.962 06:52:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.962 06:52:03 -- nvmf/common.sh@46 -- # : 0 00:11:56.962 06:52:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:56.962 06:52:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:56.962 06:52:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:56.962 06:52:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.962 06:52:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.962 06:52:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:56.962 06:52:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:56.962 06:52:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:56.962 06:52:03 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.962 06:52:03 -- target/ns_hotplug_stress.sh@13 -- # 
nvmftestinit 00:11:56.962 06:52:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:56.962 06:52:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.962 06:52:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:56.962 06:52:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:56.962 06:52:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:56.962 06:52:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.962 06:52:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.962 06:52:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.962 06:52:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:56.962 06:52:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:56.962 06:52:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:56.962 06:52:03 -- common/autotest_common.sh@10 -- # set +x 00:11:58.862 06:52:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:58.862 06:52:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:58.862 06:52:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:58.862 06:52:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:58.862 06:52:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:58.862 06:52:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:58.862 06:52:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:58.862 06:52:05 -- nvmf/common.sh@294 -- # net_devs=() 00:11:58.862 06:52:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:58.862 06:52:05 -- nvmf/common.sh@295 -- # e810=() 00:11:58.862 06:52:05 -- nvmf/common.sh@295 -- # local -ga e810 00:11:58.862 06:52:05 -- nvmf/common.sh@296 -- # x722=() 00:11:58.862 06:52:05 -- nvmf/common.sh@296 -- # local -ga x722 00:11:58.862 06:52:05 -- nvmf/common.sh@297 -- # mlx=() 00:11:58.862 06:52:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:58.862 06:52:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.862 06:52:05 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.862 06:52:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:58.862 06:52:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:58.862 06:52:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:58.862 06:52:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:58.862 06:52:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:58.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:58.862 06:52:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:58.862 06:52:05 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:58.862 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:58.862 06:52:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:58.862 06:52:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:58.862 06:52:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.862 06:52:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:58.862 06:52:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.862 06:52:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:58.862 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:58.862 06:52:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.862 06:52:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:58.862 06:52:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.862 06:52:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:58.862 06:52:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.862 06:52:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:58.862 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:58.862 06:52:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.862 06:52:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:58.862 06:52:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:58.862 06:52:05 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:11:58.862 06:52:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:58.862 06:52:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:58.862 06:52:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.862 06:52:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.862 06:52:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.862 06:52:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:58.862 06:52:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.862 06:52:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.862 06:52:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:58.862 06:52:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.862 06:52:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.862 06:52:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:58.862 06:52:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:58.862 06:52:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.862 06:52:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.862 06:52:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.862 06:52:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.862 06:52:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:58.862 06:52:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.121 06:52:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.121 06:52:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.121 06:52:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:59.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:59.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:11:59.121 00:11:59.121 --- 10.0.0.2 ping statistics --- 00:11:59.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.121 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:11:59.121 06:52:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:11:59.121 00:11:59.121 --- 10.0.0.1 ping statistics --- 00:11:59.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.121 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:11:59.121 06:52:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.121 06:52:06 -- nvmf/common.sh@410 -- # return 0 00:11:59.121 06:52:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:59.121 06:52:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.121 06:52:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:59.121 06:52:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:59.121 06:52:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.121 06:52:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:59.121 06:52:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:59.121 06:52:06 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:11:59.121 06:52:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:59.121 06:52:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:59.121 06:52:06 -- common/autotest_common.sh@10 -- # set +x 00:11:59.121 06:52:06 -- nvmf/common.sh@469 -- # nvmfpid=2985243 00:11:59.121 06:52:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:59.121 06:52:06 -- nvmf/common.sh@470 -- # waitforlisten 2985243 00:11:59.121 06:52:06 -- 
common/autotest_common.sh@819 -- # '[' -z 2985243 ']' 00:11:59.121 06:52:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.121 06:52:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:59.121 06:52:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.121 06:52:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:59.121 06:52:06 -- common/autotest_common.sh@10 -- # set +x 00:11:59.121 [2024-05-12 06:52:06.098283] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:59.121 [2024-05-12 06:52:06.098367] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.121 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.121 [2024-05-12 06:52:06.162575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:59.379 [2024-05-12 06:52:06.268854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:59.379 [2024-05-12 06:52:06.269002] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.379 [2024-05-12 06:52:06.269019] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.379 [2024-05-12 06:52:06.269031] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:59.379 [2024-05-12 06:52:06.269118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.379 [2024-05-12 06:52:06.269182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.379 [2024-05-12 06:52:06.269185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.945 06:52:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:59.945 06:52:07 -- common/autotest_common.sh@852 -- # return 0 00:11:59.945 06:52:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:59.945 06:52:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:59.945 06:52:07 -- common/autotest_common.sh@10 -- # set +x 00:12:00.202 06:52:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.202 06:52:07 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:12:00.202 06:52:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:00.460 [2024-05-12 06:52:07.340039] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.460 06:52:07 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:00.718 06:52:07 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.974 [2024-05-12 06:52:07.862761] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.974 06:52:07 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:01.231 06:52:08 -- target/ns_hotplug_stress.sh@23 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:01.507 Malloc0 00:12:01.507 06:52:08 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:01.780 Delay0 00:12:01.780 06:52:08 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.780 06:52:08 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:02.037 NULL1 00:12:02.037 06:52:09 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:02.294 06:52:09 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2985685 00:12:02.294 06:52:09 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:02.294 06:52:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:02.294 06:52:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.294 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.669 Read completed with error (sct=0, sc=11) 00:12:03.669 06:52:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.669 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:03.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.669 06:52:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:12:03.670 06:52:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:03.928 true 00:12:03.928 06:52:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:03.928 06:52:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.863 06:52:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.120 06:52:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:12:05.120 06:52:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:05.381 true 00:12:05.381 06:52:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:05.381 06:52:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.640 06:52:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.896 06:52:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:12:05.896 06:52:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:06.154 true 00:12:06.154 
06:52:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:06.154 06:52:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.412 06:52:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.412 06:52:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:12:06.412 06:52:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:06.670 true 00:12:06.670 06:52:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:06.670 06:52:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.045 06:52:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.045 06:52:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:12:08.045 06:52:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:08.303 true 00:12:08.303 06:52:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:08.303 06:52:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.560 06:52:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.818 06:52:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:12:08.818 06:52:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:09.076 true 00:12:09.076 06:52:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:09.076 06:52:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.011 06:52:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.268 06:52:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:12:10.268 06:52:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:10.526 true 00:12:10.526 06:52:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:10.526 06:52:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.784 06:52:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.042 06:52:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:12:11.042 06:52:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:11.301 true 00:12:11.301 06:52:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:11.301 06:52:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:12.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.235 06:52:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.493 06:52:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:12:12.493 06:52:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:12.750 true 00:12:12.750 06:52:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:12.750 06:52:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.008 06:52:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.266 06:52:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:12:13.266 06:52:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:13.524 true 00:12:13.524 06:52:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:13.524 06:52:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.458 06:52:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.458 06:52:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 
00:12:14.458 06:52:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:14.716 true 00:12:14.716 06:52:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:14.716 06:52:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.974 06:52:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.283 06:52:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:12:15.283 06:52:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:15.540 true 00:12:15.540 06:52:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:15.540 06:52:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.473 06:52:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.730 06:52:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:12:16.730 06:52:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:16.988 true 00:12:16.988 06:52:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:16.988 06:52:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.245 06:52:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.503 
06:52:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:12:17.503 06:52:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:17.760 true 00:12:17.760 06:52:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:17.760 06:52:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.018 06:52:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.276 06:52:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:12:18.276 06:52:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:18.534 true 00:12:18.534 06:52:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:18.534 06:52:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.468 06:52:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.725 06:52:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:12:19.725 06:52:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:19.983 true 00:12:19.983 06:52:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:19.983 06:52:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.241 06:52:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.499 06:52:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:12:20.499 06:52:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:20.757 true 00:12:20.757 06:52:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:20.757 06:52:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.691 06:52:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.948 06:52:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:12:21.948 06:52:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:22.206 true 00:12:22.206 06:52:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:22.206 06:52:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.464 06:52:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.722 06:52:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:12:22.722 06:52:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:22.980 true 00:12:22.980 06:52:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 
2985685 00:12:22.980 06:52:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.912 06:52:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.169 06:52:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:12:24.169 06:52:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:24.426 true 00:12:24.426 06:52:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:24.426 06:52:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.684 06:52:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.942 06:52:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:12:24.942 06:52:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:25.200 true 00:12:25.200 06:52:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:25.200 06:52:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.132 06:52:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:12:26.390 06:52:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:12:26.390 06:52:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:26.648 true 00:12:26.648 06:52:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:26.648 06:52:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.905 06:52:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.163 06:52:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:12:27.163 06:52:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:27.420 true 00:12:27.420 06:52:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:27.421 06:52:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.678 06:52:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.936 06:52:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:12:27.936 06:52:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:28.193 true 00:12:28.194 06:52:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:28.194 06:52:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.162 06:52:36 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.420 06:52:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:12:29.420 06:52:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:29.678 true 00:12:29.678 06:52:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:29.678 06:52:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.936 06:52:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.194 06:52:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:12:30.194 06:52:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:30.452 true 00:12:30.452 06:52:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:30.452 06:52:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.387 06:52:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.387 06:52:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:12:31.387 06:52:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:31.645 true 00:12:31.645 06:52:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 
00:12:31.645 06:52:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.210 06:52:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.210 06:52:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:12:32.210 06:52:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:32.468 true 00:12:32.468 06:52:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:32.468 06:52:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.402 Initializing NVMe Controllers 00:12:33.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:33.402 Controller IO queue size 128, less than required. 00:12:33.402 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:33.402 Controller IO queue size 128, less than required. 00:12:33.402 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:33.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:33.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:33.402 Initialization complete. Launching workers. 
00:12:33.402 ======================================================== 00:12:33.402 Latency(us) 00:12:33.402 Device Information : IOPS MiB/s Average min max 00:12:33.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 844.96 0.41 78561.19 1789.87 1011701.13 00:12:33.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11117.57 5.43 11513.51 2332.47 357018.01 00:12:33.402 ======================================================== 00:12:33.402 Total : 11962.53 5.84 16249.33 1789.87 1011701.13 00:12:33.402 00:12:33.402 06:52:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.659 06:52:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:12:33.659 06:52:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:33.917 true 00:12:33.917 06:52:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2985685 00:12:33.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2985685) - No such process 00:12:33.917 06:52:40 -- target/ns_hotplug_stress.sh@44 -- # wait 2985685 00:12:33.917 06:52:40 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:33.917 06:52:40 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:12:33.917 06:52:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:33.917 06:52:40 -- nvmf/common.sh@116 -- # sync 00:12:33.917 06:52:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:33.917 06:52:40 -- nvmf/common.sh@119 -- # set +e 00:12:33.917 06:52:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:33.917 06:52:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:33.918 rmmod nvme_tcp 00:12:33.918 rmmod nvme_fabrics 00:12:33.918 rmmod nvme_keyring 00:12:33.918 06:52:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
00:12:33.918 06:52:40 -- nvmf/common.sh@123 -- # set -e 00:12:33.918 06:52:40 -- nvmf/common.sh@124 -- # return 0 00:12:33.918 06:52:40 -- nvmf/common.sh@477 -- # '[' -n 2985243 ']' 00:12:33.918 06:52:40 -- nvmf/common.sh@478 -- # killprocess 2985243 00:12:33.918 06:52:40 -- common/autotest_common.sh@926 -- # '[' -z 2985243 ']' 00:12:33.918 06:52:40 -- common/autotest_common.sh@930 -- # kill -0 2985243 00:12:33.918 06:52:40 -- common/autotest_common.sh@931 -- # uname 00:12:33.918 06:52:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:33.918 06:52:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2985243 00:12:33.918 06:52:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:33.918 06:52:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:33.918 06:52:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2985243' 00:12:33.918 killing process with pid 2985243 00:12:33.918 06:52:40 -- common/autotest_common.sh@945 -- # kill 2985243 00:12:33.918 06:52:40 -- common/autotest_common.sh@950 -- # wait 2985243 00:12:34.176 06:52:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:34.176 06:52:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:34.176 06:52:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:34.176 06:52:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.176 06:52:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:34.176 06:52:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.176 06:52:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.176 06:52:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.704 06:52:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:36.704 00:12:36.704 real 0m39.457s 00:12:36.704 user 2m32.988s 00:12:36.704 sys 0m10.046s 00:12:36.704 06:52:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.704 06:52:43 -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.704 ************************************ 00:12:36.705 END TEST nvmf_ns_hotplug_stress 00:12:36.705 ************************************ 00:12:36.705 06:52:43 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:36.705 06:52:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:36.705 06:52:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.705 06:52:43 -- common/autotest_common.sh@10 -- # set +x 00:12:36.705 ************************************ 00:12:36.705 START TEST nvmf_connect_stress 00:12:36.705 ************************************ 00:12:36.705 06:52:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:36.705 * Looking for test storage... 00:12:36.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.705 06:52:43 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.705 06:52:43 -- nvmf/common.sh@7 -- # uname -s 00:12:36.705 06:52:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.705 06:52:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.705 06:52:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.705 06:52:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.705 06:52:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.705 06:52:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.705 06:52:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.705 06:52:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.705 06:52:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.705 06:52:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.705 06:52:43 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.705 06:52:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.705 06:52:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.705 06:52:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.705 06:52:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.705 06:52:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.705 06:52:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.705 06:52:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.705 06:52:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.705 06:52:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.705 06:52:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.705 06:52:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.705 06:52:43 -- paths/export.sh@5 -- # export PATH 00:12:36.705 06:52:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.705 06:52:43 -- nvmf/common.sh@46 -- # : 0 00:12:36.705 06:52:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:36.705 06:52:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:36.705 06:52:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:36.705 06:52:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.705 06:52:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.705 06:52:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:36.705 06:52:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:36.705 06:52:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:36.705 06:52:43 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:36.705 06:52:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:36.705 06:52:43 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:12:36.705 06:52:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:36.705 06:52:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:36.705 06:52:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:36.705 06:52:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.705 06:52:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.705 06:52:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.705 06:52:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:36.705 06:52:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:36.705 06:52:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:36.705 06:52:43 -- common/autotest_common.sh@10 -- # set +x 00:12:38.605 06:52:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:38.605 06:52:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:38.605 06:52:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:38.605 06:52:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:38.605 06:52:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:38.605 06:52:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:38.605 06:52:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:38.605 06:52:45 -- nvmf/common.sh@294 -- # net_devs=() 00:12:38.605 06:52:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:38.605 06:52:45 -- nvmf/common.sh@295 -- # e810=() 00:12:38.605 06:52:45 -- nvmf/common.sh@295 -- # local -ga e810 00:12:38.605 06:52:45 -- nvmf/common.sh@296 -- # x722=() 00:12:38.605 06:52:45 -- nvmf/common.sh@296 -- # local -ga x722 00:12:38.605 06:52:45 -- nvmf/common.sh@297 -- # mlx=() 00:12:38.605 06:52:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:38.605 06:52:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.605 06:52:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:38.605 06:52:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:38.605 06:52:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:38.605 06:52:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:38.605 06:52:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:38.605 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:38.605 06:52:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:38.605 06:52:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:38.605 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:12:38.605 06:52:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:38.605 06:52:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:38.605 06:52:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.605 06:52:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:38.605 06:52:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.605 06:52:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:38.605 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:38.605 06:52:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.605 06:52:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:38.605 06:52:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.605 06:52:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:38.605 06:52:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.605 06:52:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:38.605 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:38.605 06:52:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.605 06:52:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:38.605 06:52:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:38.605 06:52:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:38.605 06:52:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:38.605 06:52:45 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:38.605 06:52:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.605 06:52:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.605 06:52:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.605 06:52:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:38.605 06:52:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.606 06:52:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.606 06:52:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:38.606 06:52:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.606 06:52:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.606 06:52:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:38.606 06:52:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:38.606 06:52:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.606 06:52:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.606 06:52:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.606 06:52:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.606 06:52:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:38.606 06:52:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.606 06:52:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.606 06:52:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.606 06:52:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:38.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:38.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:12:38.606 00:12:38.606 --- 10.0.0.2 ping statistics --- 00:12:38.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.606 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:12:38.606 06:52:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:12:38.606 00:12:38.606 --- 10.0.0.1 ping statistics --- 00:12:38.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.606 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:38.606 06:52:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.606 06:52:45 -- nvmf/common.sh@410 -- # return 0 00:12:38.606 06:52:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:38.606 06:52:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.606 06:52:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:38.606 06:52:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:38.606 06:52:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.606 06:52:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:38.606 06:52:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:38.606 06:52:45 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:38.606 06:52:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:38.606 06:52:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:38.606 06:52:45 -- common/autotest_common.sh@10 -- # set +x 00:12:38.606 06:52:45 -- nvmf/common.sh@469 -- # nvmfpid=2991521 00:12:38.606 06:52:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:38.606 06:52:45 -- nvmf/common.sh@470 -- # waitforlisten 2991521 00:12:38.606 06:52:45 -- 
common/autotest_common.sh@819 -- # '[' -z 2991521 ']' 00:12:38.606 06:52:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.606 06:52:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:38.606 06:52:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.606 06:52:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:38.606 06:52:45 -- common/autotest_common.sh@10 -- # set +x 00:12:38.606 [2024-05-12 06:52:45.461769] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:38.606 [2024-05-12 06:52:45.461857] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.606 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.606 [2024-05-12 06:52:45.531192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:38.606 [2024-05-12 06:52:45.645167] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:38.606 [2024-05-12 06:52:45.645335] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.606 [2024-05-12 06:52:45.645355] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.606 [2024-05-12 06:52:45.645369] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:38.606 [2024-05-12 06:52:45.645454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.606 [2024-05-12 06:52:45.645570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.606 [2024-05-12 06:52:45.645573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.540 06:52:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:39.540 06:52:46 -- common/autotest_common.sh@852 -- # return 0 00:12:39.540 06:52:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:39.540 06:52:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:39.540 06:52:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.540 06:52:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.540 06:52:46 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.540 06:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.540 06:52:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.540 [2024-05-12 06:52:46.422991] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.540 06:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.540 06:52:46 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.540 06:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.540 06:52:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.540 06:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.540 06:52:46 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.540 06:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.540 06:52:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.540 [2024-05-12 06:52:46.447835] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:39.540 06:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.540 06:52:46 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:39.540 06:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.540 06:52:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.540 NULL1 00:12:39.540 06:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.540 06:52:46 -- target/connect_stress.sh@21 -- # PERF_PID=2991680 00:12:39.540 06:52:46 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:39.540 06:52:46 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:39.540 06:52:46 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 06:52:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.540 06:52:46 -- target/connect_stress.sh@28 -- # cat 00:12:39.540 
06:52:46 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:39.540 06:52:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.540 06:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.540 06:52:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.798 06:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.798 06:52:46 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:39.798 06:52:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.798 06:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.798 06:52:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.056 06:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.056 06:52:47 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:40.056 06:52:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.056 06:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.056 06:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:40.621 06:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.621 06:52:47 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:40.621 06:52:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.621 06:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.621 06:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:40.879 06:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.879 06:52:47 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:40.879 06:52:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.879 06:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.879 06:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.137 06:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.137 06:52:48 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:41.137 06:52:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.137 06:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.137 06:52:48 -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.394 06:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.394 06:52:48 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:41.394 06:52:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.394 06:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.394 06:52:48 -- common/autotest_common.sh@10 -- # set +x 00:12:41.652 06:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.652 06:52:48 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:41.652 06:52:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.652 06:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.652 06:52:48 -- common/autotest_common.sh@10 -- # set +x 00:12:42.217 06:52:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.217 06:52:49 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:42.217 06:52:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.217 06:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.217 06:52:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.475 06:52:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.475 06:52:49 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:42.475 06:52:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.475 06:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.475 06:52:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.733 06:52:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.733 06:52:49 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:42.733 06:52:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.733 06:52:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.733 06:52:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.990 06:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.990 06:52:50 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:42.990 06:52:50 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.990 06:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.990 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:12:43.247 06:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.247 06:52:50 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:43.247 06:52:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.247 06:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.247 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:12:43.864 06:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.864 06:52:50 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:43.864 06:52:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.864 06:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.864 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:12:44.121 06:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:44.121 06:52:51 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:44.121 06:52:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.121 06:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:44.121 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:12:44.380 06:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:44.380 06:52:51 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:44.380 06:52:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.380 06:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:44.380 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:12:44.637 06:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:44.637 06:52:51 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:44.637 06:52:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.637 06:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:44.637 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:12:44.895 06:52:51 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:44.895 06:52:51 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:44.895 06:52:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.895 06:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:44.895 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:12:45.460 06:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.460 06:52:52 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:45.460 06:52:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.460 06:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.460 06:52:52 -- common/autotest_common.sh@10 -- # set +x 00:12:45.718 06:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.718 06:52:52 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:45.718 06:52:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.718 06:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.718 06:52:52 -- common/autotest_common.sh@10 -- # set +x 00:12:45.975 06:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.975 06:52:52 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:45.975 06:52:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.975 06:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.975 06:52:52 -- common/autotest_common.sh@10 -- # set +x 00:12:46.233 06:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.233 06:52:53 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:46.233 06:52:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.233 06:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.233 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:12:46.490 06:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.490 06:52:53 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:46.490 06:52:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.490 06:52:53 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.490 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 06:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.054 06:52:53 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:47.054 06:52:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.054 06:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.054 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:12:47.312 06:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.312 06:52:54 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:47.312 06:52:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.312 06:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.312 06:52:54 -- common/autotest_common.sh@10 -- # set +x 00:12:47.569 06:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.569 06:52:54 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:47.569 06:52:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.569 06:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.569 06:52:54 -- common/autotest_common.sh@10 -- # set +x 00:12:47.826 06:52:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.826 06:52:54 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:47.826 06:52:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.826 06:52:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.826 06:52:54 -- common/autotest_common.sh@10 -- # set +x 00:12:48.083 06:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.083 06:52:55 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:48.083 06:52:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.083 06:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.083 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:48.648 06:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.648 06:52:55 -- 
target/connect_stress.sh@34 -- # kill -0 2991680 00:12:48.648 06:52:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.648 06:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.648 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:48.905 06:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.905 06:52:55 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:48.905 06:52:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.905 06:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.905 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:49.162 06:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.162 06:52:56 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:49.162 06:52:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.162 06:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.162 06:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:49.419 06:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.419 06:52:56 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:49.419 06:52:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.419 06:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.419 06:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:49.419 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:49.676 06:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.676 06:52:56 -- target/connect_stress.sh@34 -- # kill -0 2991680 00:12:49.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2991680) - No such process 00:12:49.677 06:52:56 -- target/connect_stress.sh@38 -- # wait 2991680 00:12:49.677 06:52:56 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:49.677 06:52:56 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 
00:12:49.677 06:52:56 -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:49.677 06:52:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:49.677 06:52:56 -- nvmf/common.sh@116 -- # sync 00:12:49.677 06:52:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:49.677 06:52:56 -- nvmf/common.sh@119 -- # set +e 00:12:49.677 06:52:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:49.677 06:52:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:49.677 rmmod nvme_tcp 00:12:49.935 rmmod nvme_fabrics 00:12:49.935 rmmod nvme_keyring 00:12:49.935 06:52:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:49.935 06:52:56 -- nvmf/common.sh@123 -- # set -e 00:12:49.935 06:52:56 -- nvmf/common.sh@124 -- # return 0 00:12:49.935 06:52:56 -- nvmf/common.sh@477 -- # '[' -n 2991521 ']' 00:12:49.935 06:52:56 -- nvmf/common.sh@478 -- # killprocess 2991521 00:12:49.935 06:52:56 -- common/autotest_common.sh@926 -- # '[' -z 2991521 ']' 00:12:49.935 06:52:56 -- common/autotest_common.sh@930 -- # kill -0 2991521 00:12:49.935 06:52:56 -- common/autotest_common.sh@931 -- # uname 00:12:49.935 06:52:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:49.935 06:52:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2991521 00:12:49.935 06:52:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:49.935 06:52:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:49.935 06:52:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2991521' 00:12:49.935 killing process with pid 2991521 00:12:49.935 06:52:56 -- common/autotest_common.sh@945 -- # kill 2991521 00:12:49.935 06:52:56 -- common/autotest_common.sh@950 -- # wait 2991521 00:12:50.192 06:52:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:50.192 06:52:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:50.192 06:52:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:50.192 06:52:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.192 06:52:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:50.192 06:52:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.192 06:52:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.192 06:52:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.095 06:52:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:52.095 00:12:52.095 real 0m15.868s 00:12:52.095 user 0m40.244s 00:12:52.095 sys 0m5.913s 00:12:52.095 06:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.095 06:52:59 -- common/autotest_common.sh@10 -- # set +x 00:12:52.095 ************************************ 00:12:52.095 END TEST nvmf_connect_stress 00:12:52.095 ************************************ 00:12:52.095 06:52:59 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:52.095 06:52:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:52.095 06:52:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:52.096 06:52:59 -- common/autotest_common.sh@10 -- # set +x 00:12:52.096 ************************************ 00:12:52.096 START TEST nvmf_fused_ordering 00:12:52.096 ************************************ 00:12:52.096 06:52:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:52.353 * Looking for test storage... 
00:12:52.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.353 06:52:59 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.353 06:52:59 -- nvmf/common.sh@7 -- # uname -s 00:12:52.353 06:52:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.353 06:52:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.353 06:52:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.353 06:52:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.353 06:52:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.353 06:52:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.353 06:52:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.353 06:52:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.353 06:52:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.353 06:52:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.353 06:52:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.353 06:52:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.353 06:52:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.353 06:52:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.353 06:52:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.353 06:52:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.353 06:52:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.353 06:52:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.353 06:52:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.353 06:52:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.353 06:52:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.353 06:52:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.353 06:52:59 -- paths/export.sh@5 -- # export PATH 00:12:52.354 06:52:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.354 06:52:59 -- nvmf/common.sh@46 -- # : 0 00:12:52.354 06:52:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:52.354 06:52:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:52.354 06:52:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:52.354 06:52:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.354 06:52:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.354 06:52:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:52.354 06:52:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:52.354 06:52:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:52.354 06:52:59 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:52.354 06:52:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:52.354 06:52:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.354 06:52:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:52.354 06:52:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:52.354 06:52:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:52.354 06:52:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.354 06:52:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.354 06:52:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.354 06:52:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:52.354 06:52:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:52.354 06:52:59 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:12:52.354 06:52:59 -- common/autotest_common.sh@10 -- # set +x 00:12:54.261 06:53:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:54.261 06:53:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:54.261 06:53:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:54.261 06:53:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:54.261 06:53:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:54.261 06:53:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:54.261 06:53:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:54.261 06:53:01 -- nvmf/common.sh@294 -- # net_devs=() 00:12:54.261 06:53:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:54.261 06:53:01 -- nvmf/common.sh@295 -- # e810=() 00:12:54.261 06:53:01 -- nvmf/common.sh@295 -- # local -ga e810 00:12:54.261 06:53:01 -- nvmf/common.sh@296 -- # x722=() 00:12:54.261 06:53:01 -- nvmf/common.sh@296 -- # local -ga x722 00:12:54.261 06:53:01 -- nvmf/common.sh@297 -- # mlx=() 00:12:54.261 06:53:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:54.261 06:53:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.261 06:53:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:54.261 06:53:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:54.261 06:53:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:54.261 06:53:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:54.261 06:53:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:54.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:54.261 06:53:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:54.261 06:53:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:54.261 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:54.261 06:53:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:54.261 06:53:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:12:54.261 06:53:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.261 06:53:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:54.261 06:53:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.261 06:53:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:54.261 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:54.261 06:53:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.261 06:53:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:54.261 06:53:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.261 06:53:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:54.261 06:53:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.261 06:53:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:54.261 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:54.261 06:53:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.261 06:53:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:54.261 06:53:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:54.261 06:53:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:54.261 06:53:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:54.261 06:53:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.261 06:53:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.261 06:53:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.261 06:53:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:54.261 06:53:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.261 06:53:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.261 06:53:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:54.261 06:53:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:12:54.261 06:53:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.261 06:53:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:54.261 06:53:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:54.261 06:53:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.261 06:53:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.520 06:53:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.520 06:53:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.520 06:53:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:54.520 06:53:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.520 06:53:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.520 06:53:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.520 06:53:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:54.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:12:54.520 00:12:54.520 --- 10.0.0.2 ping statistics --- 00:12:54.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.520 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:12:54.520 06:53:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:12:54.520 00:12:54.520 --- 10.0.0.1 ping statistics --- 00:12:54.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.520 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:54.520 06:53:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.520 06:53:01 -- nvmf/common.sh@410 -- # return 0 00:12:54.520 06:53:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:54.520 06:53:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.520 06:53:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:54.520 06:53:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:54.520 06:53:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.520 06:53:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:54.520 06:53:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:54.520 06:53:01 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:54.520 06:53:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:54.520 06:53:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:54.520 06:53:01 -- common/autotest_common.sh@10 -- # set +x 00:12:54.520 06:53:01 -- nvmf/common.sh@469 -- # nvmfpid=2994868 00:12:54.520 06:53:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:54.520 06:53:01 -- nvmf/common.sh@470 -- # waitforlisten 2994868 00:12:54.520 06:53:01 -- common/autotest_common.sh@819 -- # '[' -z 2994868 ']' 00:12:54.520 06:53:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.520 06:53:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:54.520 06:53:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:54.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.520 06:53:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:54.520 06:53:01 -- common/autotest_common.sh@10 -- # set +x 00:12:54.520 [2024-05-12 06:53:01.547313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:54.520 [2024-05-12 06:53:01.547399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.520 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.520 [2024-05-12 06:53:01.611869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.778 [2024-05-12 06:53:01.722534] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:54.778 [2024-05-12 06:53:01.722702] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.778 [2024-05-12 06:53:01.722727] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.778 [2024-05-12 06:53:01.722740] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:54.778 [2024-05-12 06:53:01.722765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.708 06:53:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:55.708 06:53:02 -- common/autotest_common.sh@852 -- # return 0 00:12:55.708 06:53:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:55.708 06:53:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:55.708 06:53:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.708 06:53:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.708 06:53:02 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.708 06:53:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.708 06:53:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.708 [2024-05-12 06:53:02.565536] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.708 06:53:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.708 06:53:02 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:55.708 06:53:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.708 06:53:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.708 06:53:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.708 06:53:02 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.708 06:53:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.708 06:53:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.708 [2024-05-12 06:53:02.581665] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.708 06:53:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.708 06:53:02 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:55.708 06:53:02 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.708 06:53:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.708 NULL1 00:12:55.708 06:53:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.708 06:53:02 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:55.708 06:53:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.708 06:53:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.708 06:53:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.708 06:53:02 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:55.708 06:53:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.708 06:53:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.708 06:53:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.708 06:53:02 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:55.708 [2024-05-12 06:53:02.626313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:55.708 [2024-05-12 06:53:02.626355] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995031 ]
00:12:55.708 EAL: No free 2048 kB hugepages reported on node 1
00:12:56.273 Attached to nqn.2016-06.io.spdk:cnode1
00:12:56.273 Namespace ID: 1 size: 1GB
00:12:56.273 fused_ordering(0)
[... fused_ordering(1) through fused_ordering(1022) elided: 1022 repetitive trace lines, 00:12:56.273-00:12:58.939 ...]
00:12:58.939 fused_ordering(1023)
00:12:58.939 06:53:06 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:12:58.939 06:53:06 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:12:58.939 06:53:06 -- nvmf/common.sh@476 -- # nvmfcleanup
00:12:58.939 06:53:06 -- nvmf/common.sh@116 -- # sync
00:12:58.939 06:53:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:12:58.939 06:53:06 -- nvmf/common.sh@119 -- # set +e
00:12:58.939 06:53:06 -- nvmf/common.sh@120 -- # for i in {1..20}
00:12:58.939 06:53:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:12:58.939 rmmod nvme_tcp
00:12:59.197 rmmod nvme_fabrics
00:12:59.197 rmmod nvme_keyring
00:12:59.197 06:53:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:12:59.197 06:53:06 -- nvmf/common.sh@123 -- # set -e
00:12:59.197 06:53:06 -- nvmf/common.sh@124 -- # return 0
00:12:59.197 06:53:06 -- nvmf/common.sh@477 -- # '[' -n 2994868 ']'
00:12:59.197 06:53:06 -- nvmf/common.sh@478 -- # killprocess 2994868
00:12:59.197 06:53:06 -- common/autotest_common.sh@926 -- # '[' -z 2994868 ']'
00:12:59.197 06:53:06 -- common/autotest_common.sh@930 -- # kill -0 2994868
00:12:59.197 06:53:06 -- common/autotest_common.sh@931 -- # uname
00:12:59.197 06:53:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:12:59.197 06:53:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2994868
00:12:59.197 06:53:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:12:59.197 06:53:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:12:59.197 06:53:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2994868'
killing process with pid 2994868
00:12:59.197 06:53:06 -- common/autotest_common.sh@945 -- # kill 2994868
00:12:59.197 06:53:06 -- common/autotest_common.sh@950 -- # wait 2994868
00:12:59.455 06:53:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:12:59.455 06:53:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:12:59.455 06:53:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:12:59.455 06:53:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:59.455 06:53:06 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:12:59.455 06:53:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:59.455 06:53:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:59.455 06:53:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:01.363 06:53:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:13:01.363
00:13:01.363 real 0m9.207s
00:13:01.363 user 0m7.237s
00:13:01.363 sys 0m3.910s
00:13:01.363 06:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:01.363 06:53:08 -- common/autotest_common.sh@10 -- # set +x
00:13:01.363 ************************************
00:13:01.363 END TEST nvmf_fused_ordering
00:13:01.363 ************************************
00:13:01.363 06:53:08 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:13:01.363 06:53:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:13:01.363 06:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:01.363 06:53:08 -- common/autotest_common.sh@10 -- # set +x
00:13:01.363 ************************************
00:13:01.363 START TEST nvmf_delete_subsystem
00:13:01.363 ************************************
00:13:01.363 06:53:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:13:01.622 * Looking for test storage...
00:13:01.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:01.622 06:53:08 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:01.622 06:53:08 -- nvmf/common.sh@7 -- # uname -s
00:13:01.622 06:53:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:01.622 06:53:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:01.622 06:53:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:01.622 06:53:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:01.622 06:53:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:01.622 06:53:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:01.622 06:53:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:01.622 06:53:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:01.622 06:53:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:01.622 06:53:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:01.622 06:53:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:13:01.622 06:53:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:13:01.622 06:53:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:01.622 06:53:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:01.622 06:53:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:01.622 06:53:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:01.622 06:53:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:01.622 06:53:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:01.622 06:53:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:01.622 06:53:08 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.622 06:53:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.622 06:53:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.622 06:53:08 -- paths/export.sh@5 -- # export PATH 00:13:01.622 06:53:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.622 06:53:08 -- nvmf/common.sh@46 -- # : 0 00:13:01.622 06:53:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:01.622 06:53:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:01.622 06:53:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:01.622 06:53:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.622 06:53:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.622 06:53:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:01.622 06:53:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:01.622 06:53:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:01.622 06:53:08 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:01.622 06:53:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:01.622 06:53:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.622 06:53:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:01.622 06:53:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:01.622 06:53:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:01.622 06:53:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.622 06:53:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.622 06:53:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.622 06:53:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:01.622 06:53:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:01.622 06:53:08 
-- nvmf/common.sh@284 -- # xtrace_disable 00:13:01.622 06:53:08 -- common/autotest_common.sh@10 -- # set +x 00:13:03.536 06:53:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:03.536 06:53:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:03.536 06:53:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:03.536 06:53:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:03.536 06:53:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:03.536 06:53:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:03.536 06:53:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:03.536 06:53:10 -- nvmf/common.sh@294 -- # net_devs=() 00:13:03.536 06:53:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:03.536 06:53:10 -- nvmf/common.sh@295 -- # e810=() 00:13:03.536 06:53:10 -- nvmf/common.sh@295 -- # local -ga e810 00:13:03.536 06:53:10 -- nvmf/common.sh@296 -- # x722=() 00:13:03.536 06:53:10 -- nvmf/common.sh@296 -- # local -ga x722 00:13:03.536 06:53:10 -- nvmf/common.sh@297 -- # mlx=() 00:13:03.536 06:53:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:03.536 06:53:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.536 06:53:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:03.536 06:53:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:03.536 06:53:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:03.536 06:53:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:03.536 06:53:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:03.536 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:03.536 06:53:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:03.536 06:53:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:03.536 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:03.536 06:53:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:03.536 06:53:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:13:03.536 06:53:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.536 06:53:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:03.536 06:53:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.536 06:53:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:03.536 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:03.536 06:53:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.536 06:53:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:03.536 06:53:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.536 06:53:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:03.536 06:53:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.536 06:53:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:03.536 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:03.536 06:53:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.536 06:53:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:03.536 06:53:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:03.536 06:53:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:03.536 06:53:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.536 06:53:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.536 06:53:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.536 06:53:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:03.536 06:53:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.536 06:53:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.536 06:53:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:03.536 06:53:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:13:03.536 06:53:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.536 06:53:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:03.536 06:53:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:03.536 06:53:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.536 06:53:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.536 06:53:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.536 06:53:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.536 06:53:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:03.536 06:53:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.536 06:53:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.536 06:53:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.536 06:53:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:03.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:13:03.536 00:13:03.536 --- 10.0.0.2 ping statistics --- 00:13:03.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.536 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:03.536 06:53:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:13:03.536 00:13:03.536 --- 10.0.0.1 ping statistics --- 00:13:03.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.536 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:03.536 06:53:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.536 06:53:10 -- nvmf/common.sh@410 -- # return 0 00:13:03.536 06:53:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:03.536 06:53:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.536 06:53:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:03.536 06:53:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.536 06:53:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:03.536 06:53:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:03.536 06:53:10 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:03.536 06:53:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:03.536 06:53:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:03.536 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:13:03.794 06:53:10 -- nvmf/common.sh@469 -- # nvmfpid=2997387 00:13:03.794 06:53:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:03.794 06:53:10 -- nvmf/common.sh@470 -- # waitforlisten 2997387 00:13:03.794 06:53:10 -- common/autotest_common.sh@819 -- # '[' -z 2997387 ']' 00:13:03.794 06:53:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.794 06:53:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:03.794 06:53:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:03.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.794 06:53:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:03.794 06:53:10 -- common/autotest_common.sh@10 -- # set +x 00:13:03.794 [2024-05-12 06:53:10.707187] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:03.794 [2024-05-12 06:53:10.707275] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.794 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.794 [2024-05-12 06:53:10.773910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:03.794 [2024-05-12 06:53:10.883322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:03.794 [2024-05-12 06:53:10.883481] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.794 [2024-05-12 06:53:10.883499] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.794 [2024-05-12 06:53:10.883515] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:03.794 [2024-05-12 06:53:10.883583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.794 [2024-05-12 06:53:10.883587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.728 06:53:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:04.728 06:53:11 -- common/autotest_common.sh@852 -- # return 0 00:13:04.728 06:53:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:04.728 06:53:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:04.728 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:13:04.728 06:53:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.728 06:53:11 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.728 06:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.728 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:13:04.728 [2024-05-12 06:53:11.696657] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.728 06:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.728 06:53:11 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:04.728 06:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.728 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:13:04.728 06:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.728 06:53:11 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.728 06:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.728 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:13:04.728 [2024-05-12 06:53:11.712846] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.728 06:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:13:04.728 06:53:11 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:04.728 06:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.728 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:13:04.728 NULL1 00:13:04.728 06:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.728 06:53:11 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:04.728 06:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.728 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:13:04.728 Delay0 00:13:04.728 06:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.728 06:53:11 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.728 06:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.728 06:53:11 -- common/autotest_common.sh@10 -- # set +x 00:13:04.728 06:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.728 06:53:11 -- target/delete_subsystem.sh@28 -- # perf_pid=2997546 00:13:04.728 06:53:11 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:04.728 06:53:11 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:04.728 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.728 [2024-05-12 06:53:11.787571] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:13:06.623 06:53:13 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.623 06:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.623 06:53:13 -- common/autotest_common.sh@10 -- # set +x 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error 
(sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 starting I/O failed: -6 00:13:06.881 [2024-05-12 06:53:13.916827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d49b0 is same with the state(5) to be set 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed 
with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 00:13:06.881 Write completed with error (sct=0, sc=8) 00:13:06.881 Read completed with error (sct=0, sc=8) 
00:13:06.881 Read completed with error (sct=0, sc=8)
00:13:06.881 Write completed with error (sct=0, sc=8)
00:13:06.881 starting I/O failed: -6
00:13:06.882 [2024-05-12 06:53:13.918329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f507000c1d0 is same with the state(5) to be set
00:13:07.815 [2024-05-12 06:53:14.889314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f35a0 is same with the state(5) to be set
00:13:07.815 [2024-05-12 06:53:14.918166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4c60 is same with the state(5) to be set
00:13:07.816 [2024-05-12 06:53:14.920029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f507000bf20 is same with the state(5) to be set
00:13:07.816 [2024-05-12 06:53:14.920801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f507000c480 is same with the state(5) to be set
00:13:07.816 [2024-05-12 06:53:14.920994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3f90 is same with the state(5) to be set
00:13:07.816 [2024-05-12 06:53:14.922040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f35a0 (9): Bad file descriptor
00:13:07.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:13:07.816 06:53:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:07.816 06:53:14 -- target/delete_subsystem.sh@34 -- # delay=0
00:13:07.816 06:53:14 -- target/delete_subsystem.sh@35 -- # kill -0 2997546
00:13:07.816 06:53:14 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:13:07.816 Initializing NVMe Controllers
00:13:07.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:07.816 Controller IO queue size 128, less than required.
00:13:07.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:07.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:07.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:07.816 Initialization complete. Launching workers.
00:13:07.816 ========================================================
00:13:07.816 Latency(us)
00:13:07.816 Device Information : IOPS MiB/s Average min max
00:13:07.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.38 0.08 894143.07 430.31 1010750.99
00:13:07.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.96 0.08 999457.85 327.84 2003913.87
00:13:07.816 ========================================================
00:13:07.816 Total : 329.34 0.16 944973.73 327.84 2003913.87
00:13:07.816
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@35 -- # kill -0 2997546
00:13:08.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2997546) - No such process
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@45 -- # NOT wait 2997546
00:13:08.381 06:53:15 -- common/autotest_common.sh@640 -- # local es=0
00:13:08.381 06:53:15 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 2997546
00:13:08.381 06:53:15 -- common/autotest_common.sh@628 -- # local arg=wait
00:13:08.381 06:53:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:13:08.381 06:53:15 -- common/autotest_common.sh@632 -- # type -t wait
00:13:08.381 06:53:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:13:08.381 06:53:15 -- common/autotest_common.sh@643 -- # wait 2997546
00:13:08.381 06:53:15 -- common/autotest_common.sh@643 -- # es=1
00:13:08.381 06:53:15 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:13:08.381 06:53:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:13:08.381 06:53:15 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:08.381 06:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:08.381 06:53:15 -- common/autotest_common.sh@10 -- # set +x
00:13:08.381 06:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:08.381 06:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:08.381 06:53:15 -- common/autotest_common.sh@10 -- # set +x
00:13:08.381 [2024-05-12 06:53:15.446594] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:08.381 06:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:08.381 06:53:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:08.381 06:53:15 -- common/autotest_common.sh@10 -- # set +x
00:13:08.381 06:53:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@54 -- # perf_pid=2997956
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@56 -- # delay=0
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@57 -- # kill -0 2997956
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:13:08.381 06:53:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:08.381 EAL: No free 2048 kB hugepages reported on node 1
00:13:08.381 [2024-05-12 06:53:15.509643] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:13:08.947 06:53:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:08.947 06:53:15 -- target/delete_subsystem.sh@57 -- # kill -0 2997956
00:13:08.947 06:53:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:09.512 06:53:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:09.512 06:53:16 -- target/delete_subsystem.sh@57 -- # kill -0 2997956
00:13:09.512 06:53:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:10.077 06:53:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:10.077 06:53:16 -- target/delete_subsystem.sh@57 -- # kill -0 2997956
00:13:10.077 06:53:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:10.642 06:53:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:10.642 06:53:17 -- target/delete_subsystem.sh@57 -- # kill -0 2997956
00:13:10.642 06:53:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:10.900 06:53:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:10.900 06:53:17 -- target/delete_subsystem.sh@57 -- # kill -0 2997956
00:13:10.900 06:53:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:11.465 06:53:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:11.465 06:53:18 -- target/delete_subsystem.sh@57 -- # kill -0 2997956
00:13:11.465 06:53:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:11.724 Initializing NVMe Controllers
00:13:11.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:11.724 Controller IO queue size 128, less than required.
00:13:11.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:11.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:11.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:11.724 Initialization complete. Launching workers.
00:13:11.724 ========================================================
00:13:11.724 Latency(us)
00:13:11.724 Device Information : IOPS MiB/s Average min max
00:13:11.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004748.46 1000267.58 1012099.52
00:13:11.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005653.92 1000308.58 1015026.46
00:13:11.724 ========================================================
00:13:11.724 Total : 256.00 0.12 1005201.19 1000267.58 1015026.46
00:13:11.724
00:13:11.982 06:53:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:11.982 06:53:18 -- target/delete_subsystem.sh@57 -- # kill -0 2997956
00:13:11.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2997956) - No such process
00:13:11.982 06:53:18 -- target/delete_subsystem.sh@67 -- # wait 2997956
00:13:11.982 06:53:18 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:13:11.982 06:53:18 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:13:11.982 06:53:18 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:11.982 06:53:18 -- nvmf/common.sh@116 -- # sync
00:13:11.982 06:53:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:11.982 06:53:18 -- nvmf/common.sh@119 -- # set +e
00:13:11.982 06:53:18 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:11.982 06:53:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:11.982 rmmod nvme_tcp
00:13:11.982 rmmod nvme_fabrics
00:13:11.982 rmmod nvme_keyring
00:13:11.982 06:53:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:11.982 06:53:19 -- nvmf/common.sh@123 -- # set -e
00:13:11.982 06:53:19 -- nvmf/common.sh@124 -- # return 0
00:13:11.982 06:53:19 -- nvmf/common.sh@477 -- # '[' -n 2997387 ']'
00:13:11.982 06:53:19 -- nvmf/common.sh@478 -- # killprocess 2997387
00:13:11.982 06:53:19 -- common/autotest_common.sh@926 -- # '[' -z 2997387 ']'
00:13:11.982 06:53:19 -- common/autotest_common.sh@930 -- # kill -0 2997387
00:13:11.982 06:53:19 -- common/autotest_common.sh@931 -- # uname
00:13:11.982 06:53:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:11.982 06:53:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2997387
00:13:11.982 06:53:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:13:11.982 06:53:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:13:11.982 06:53:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2997387'
00:13:11.982 killing process with pid 2997387
00:13:11.982 06:53:19 -- common/autotest_common.sh@945 -- # kill 2997387
00:13:11.982 06:53:19 -- common/autotest_common.sh@950 -- # wait 2997387
00:13:12.242 06:53:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:13:12.242 06:53:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:13:12.242 06:53:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:13:12.242 06:53:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:12.242 06:53:19 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:13:12.242 06:53:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:12.242 06:53:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:12.242 06:53:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:14.809 06:53:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:13:14.809
00:13:14.809 real 0m12.919s
00:13:14.809 user 0m29.263s
00:13:14.809 sys 0m2.901s
00:13:14.809 06:53:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:14.809 06:53:21 -- common/autotest_common.sh@10 -- # set +x
00:13:14.809 ************************************
00:13:14.809 END TEST nvmf_delete_subsystem
00:13:14.809 ************************************
00:13:14.809 06:53:21 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]]
00:13:14.809 06:53:21 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:13:14.809 06:53:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:13:14.809 06:53:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:14.809 06:53:21 -- common/autotest_common.sh@10 -- # set +x
00:13:14.809 ************************************
00:13:14.809 START TEST nvmf_nvme_cli
00:13:14.809 ************************************
00:13:14.809 06:53:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:13:14.809 * Looking for test storage...
00:13:14.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:14.809 06:53:21 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:14.809 06:53:21 -- nvmf/common.sh@7 -- # uname -s
00:13:14.809 06:53:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:14.809 06:53:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:14.809 06:53:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:14.809 06:53:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:14.809 06:53:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:14.809 06:53:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:14.809 06:53:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:14.809 06:53:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:14.809 06:53:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:14.809 06:53:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:14.809 06:53:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:13:14.809 06:53:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:13:14.809 06:53:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
06:53:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.809 06:53:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.809 06:53:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.809 06:53:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.809 06:53:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.809 06:53:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.809 06:53:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.809 06:53:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.809 06:53:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.809 06:53:21 -- paths/export.sh@5 -- # export PATH 00:13:14.809 06:53:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.809 06:53:21 -- nvmf/common.sh@46 -- # : 0 00:13:14.810 06:53:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:14.810 06:53:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:14.810 06:53:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:14.810 06:53:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.810 06:53:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.810 06:53:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:14.810 06:53:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:14.810 06:53:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:14.810 06:53:21 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.810 06:53:21 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.810 06:53:21 -- target/nvme_cli.sh@14 
-- # devs=() 00:13:14.810 06:53:21 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:14.810 06:53:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:14.810 06:53:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.810 06:53:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:14.810 06:53:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:14.810 06:53:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:14.810 06:53:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.810 06:53:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.810 06:53:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.810 06:53:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:14.810 06:53:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:14.810 06:53:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:14.810 06:53:21 -- common/autotest_common.sh@10 -- # set +x 00:13:16.712 06:53:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:16.712 06:53:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:16.712 06:53:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:16.712 06:53:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:16.712 06:53:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:16.712 06:53:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:16.712 06:53:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:16.712 06:53:23 -- nvmf/common.sh@294 -- # net_devs=() 00:13:16.712 06:53:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:16.712 06:53:23 -- nvmf/common.sh@295 -- # e810=() 00:13:16.712 06:53:23 -- nvmf/common.sh@295 -- # local -ga e810 00:13:16.712 06:53:23 -- nvmf/common.sh@296 -- # x722=() 00:13:16.712 06:53:23 -- nvmf/common.sh@296 -- # local -ga x722 00:13:16.712 06:53:23 -- nvmf/common.sh@297 -- # mlx=() 00:13:16.712 06:53:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:16.712 06:53:23 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.712 06:53:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.712 06:53:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.713 06:53:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.713 06:53:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.713 06:53:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.713 06:53:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.713 06:53:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.713 06:53:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.713 06:53:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.713 06:53:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.713 06:53:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:16.713 06:53:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:16.713 06:53:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:16.713 06:53:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:16.713 06:53:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:16.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:16.713 06:53:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:16.713 06:53:23 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:16.713 06:53:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:16.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:16.713 06:53:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:16.713 06:53:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:16.713 06:53:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.713 06:53:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:16.713 06:53:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.713 06:53:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:16.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:16.713 06:53:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.713 06:53:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:16.713 06:53:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.713 06:53:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:16.713 06:53:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.713 06:53:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:16.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:16.713 06:53:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.713 06:53:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:16.713 06:53:23 -- nvmf/common.sh@402 
-- # is_hw=yes 00:13:16.713 06:53:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:16.713 06:53:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.713 06:53:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.713 06:53:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.713 06:53:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:16.713 06:53:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.713 06:53:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.713 06:53:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:16.713 06:53:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.713 06:53:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.713 06:53:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:16.713 06:53:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:16.713 06:53:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.713 06:53:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.713 06:53:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.713 06:53:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.713 06:53:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:16.713 06:53:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.713 06:53:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.713 06:53:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.713 06:53:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:16.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:16.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:13:16.713 00:13:16.713 --- 10.0.0.2 ping statistics --- 00:13:16.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.713 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:13:16.713 06:53:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:13:16.713 00:13:16.713 --- 10.0.0.1 ping statistics --- 00:13:16.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.713 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:13:16.713 06:53:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.713 06:53:23 -- nvmf/common.sh@410 -- # return 0 00:13:16.713 06:53:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:16.713 06:53:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.713 06:53:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:16.713 06:53:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.713 06:53:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:16.713 06:53:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:16.713 06:53:23 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:16.713 06:53:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:16.713 06:53:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:16.713 06:53:23 -- common/autotest_common.sh@10 -- # set +x 00:13:16.713 06:53:23 -- nvmf/common.sh@469 -- # nvmfpid=3000440 00:13:16.713 06:53:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.713 06:53:23 -- nvmf/common.sh@470 -- # waitforlisten 3000440 00:13:16.713 06:53:23 -- common/autotest_common.sh@819 
-- # '[' -z 3000440 ']' 00:13:16.713 06:53:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.713 06:53:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:16.713 06:53:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.713 06:53:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:16.713 06:53:23 -- common/autotest_common.sh@10 -- # set +x 00:13:16.713 [2024-05-12 06:53:23.761868] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:16.713 [2024-05-12 06:53:23.761963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.713 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.713 [2024-05-12 06:53:23.832468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.971 [2024-05-12 06:53:23.952486] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:16.971 [2024-05-12 06:53:23.952659] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.971 [2024-05-12 06:53:23.952678] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.971 [2024-05-12 06:53:23.952692] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:16.971 [2024-05-12 06:53:23.952775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.971 [2024-05-12 06:53:23.952831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.971 [2024-05-12 06:53:23.952881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.971 [2024-05-12 06:53:23.952885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.903 06:53:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:17.903 06:53:24 -- common/autotest_common.sh@852 -- # return 0 00:13:17.903 06:53:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:17.903 06:53:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:17.903 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 06:53:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.903 06:53:24 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.903 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.903 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 [2024-05-12 06:53:24.718160] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.903 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.903 06:53:24 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.903 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.903 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 Malloc0 00:13:17.903 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.903 06:53:24 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:17.903 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.903 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 Malloc1 00:13:17.903 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
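The trace above starts `nvmf_tgt` inside the namespace and then blocks in `waitforlisten` until the app is up on `/var/tmp/spdk.sock`. A hedged sketch of that polling pattern (reconstructed from the trace, not copied from autotest_common.sh; the retry budget and sleep interval here are illustrative assumptions):

```shell
# Sketch of the waitforlisten pattern: poll until the target's RPC UNIX
# socket appears, bailing out early if the process dies first. The real
# helper in autotest_common.sh does more (rpc ping, configurable retries);
# this is a minimal reconstruction.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i=0
    while (( i++ < max_retries )); do
        # If the app exited before it ever listened, give up immediately.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "process $pid exited before listening on $rpc_addr" >&2
            return 1
        fi
        # -S: the path exists and is a UNIX domain socket.
        if [ -S "$rpc_addr" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

The same helper is what prints the "Waiting for process to start up and listen on UNIX domain socket ..." line seen in the trace.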
00:13:17.903 06:53:24 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:17.903 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.903 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.903 06:53:24 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.903 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.903 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.903 06:53:24 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.903 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.903 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.903 06:53:24 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.903 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.903 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 [2024-05-12 06:53:24.799628] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.903 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.903 06:53:24 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:17.903 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.903 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.903 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.903 06:53:24 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:17.903 00:13:17.904 Discovery Log Number of Records 2, Generation counter 2 00:13:17.904 =====Discovery Log Entry 0====== 00:13:17.904 trtype: tcp 00:13:17.904 adrfam: ipv4 00:13:17.904 subtype: current discovery subsystem 00:13:17.904 treq: not required 00:13:17.904 portid: 0 00:13:17.904 trsvcid: 4420 00:13:17.904 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:17.904 traddr: 10.0.0.2 00:13:17.904 eflags: explicit discovery connections, duplicate discovery information 00:13:17.904 sectype: none 00:13:17.904 =====Discovery Log Entry 1====== 00:13:17.904 trtype: tcp 00:13:17.904 adrfam: ipv4 00:13:17.904 subtype: nvme subsystem 00:13:17.904 treq: not required 00:13:17.904 portid: 0 00:13:17.904 trsvcid: 4420 00:13:17.904 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:17.904 traddr: 10.0.0.2 00:13:17.904 eflags: none 00:13:17.904 sectype: none 00:13:17.904 06:53:24 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:17.904 06:53:24 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:17.904 06:53:24 -- nvmf/common.sh@510 -- # local dev _ 00:13:17.904 06:53:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.904 06:53:24 -- nvmf/common.sh@509 -- # nvme list 00:13:17.904 06:53:24 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:17.904 06:53:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.904 06:53:24 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.904 06:53:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.904 06:53:24 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:17.904 06:53:24 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.470 06:53:25 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:18.470 06:53:25 -- common/autotest_common.sh@1177 -- # local i=0 00:13:18.470 06:53:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.470 06:53:25 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:13:18.470 06:53:25 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:13:18.470 06:53:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:20.999 06:53:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:20.999 06:53:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:20.999 06:53:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.999 06:53:27 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:13:20.999 06:53:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.999 06:53:27 -- common/autotest_common.sh@1187 -- # return 0 00:13:20.999 06:53:27 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:20.999 06:53:27 -- nvmf/common.sh@510 -- # local dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@509 -- # nvme list 00:13:20.999 06:53:27 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:20.999 06:53:27 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:20.999 06:53:27 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:13:20.999 /dev/nvme0n1 ]] 00:13:20.999 06:53:27 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:20.999 06:53:27 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:20.999 06:53:27 -- nvmf/common.sh@510 -- # local dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@509 -- # nvme list 00:13:20.999 06:53:27 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:20.999 06:53:27 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:20.999 06:53:27 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:20.999 06:53:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:20.999 06:53:27 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:20.999 06:53:27 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.999 06:53:27 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.999 06:53:27 -- common/autotest_common.sh@1198 -- # local i=0 00:13:20.999 06:53:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:20.999 06:53:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.999 06:53:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:20.999 06:53:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.999 06:53:27 -- common/autotest_common.sh@1210 -- # return 0 00:13:20.999 06:53:27 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
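Between `nvme connect` and the device checks above, the script sits in `waitforserial`, retrying `lsblk` until the expected number of namespaces carrying the subsystem serial (`SPDKISFASTANDAWESOME`) appear. A hedged sketch of that loop; the injectable `lister` parameter is an addition for illustration (the real helper calls `lsblk -l -o NAME,SERIAL` directly), and the 15-iteration budget mirrors the `(( i++ <= 15 ))` guard in the trace:

```shell
# Sketch of waitforserial: after nvme connect, poll the block-device list
# until `expected` namespaces with the given serial show up, or give up
# after ~15 tries. `lister` defaults to lsblk but can be any command that
# prints one "NAME SERIAL" pair per line, so the loop is testable without
# real NVMe devices.
waitforserial() {
    local serial=$1 expected=${2:-1} lister=${3:-"lsblk -l -o NAME,SERIAL"}
    local i=0 found=0
    while (( i++ <= 15 )); do
        # grep -c exits 1 on zero matches; `|| true` keeps that from
        # aborting the loop under `set -e`.
        found=$($lister | grep -c "$serial" || true)
        if (( found == expected )); then
            return 0
        fi
        sleep 2
    done
    return 1
}
```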
00:13:20.999 06:53:27 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.999 06:53:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.999 06:53:27 -- common/autotest_common.sh@10 -- # set +x 00:13:20.999 06:53:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.999 06:53:27 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:20.999 06:53:27 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:20.999 06:53:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:20.999 06:53:27 -- nvmf/common.sh@116 -- # sync 00:13:20.999 06:53:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:20.999 06:53:27 -- nvmf/common.sh@119 -- # set +e 00:13:20.999 06:53:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:20.999 06:53:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:20.999 rmmod nvme_tcp 00:13:20.999 rmmod nvme_fabrics 00:13:20.999 rmmod nvme_keyring 00:13:20.999 06:53:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:20.999 06:53:27 -- nvmf/common.sh@123 -- # set -e 00:13:20.999 06:53:27 -- nvmf/common.sh@124 -- # return 0 00:13:20.999 06:53:27 -- nvmf/common.sh@477 -- # '[' -n 3000440 ']' 00:13:20.999 06:53:27 -- nvmf/common.sh@478 -- # killprocess 3000440 00:13:20.999 06:53:27 -- common/autotest_common.sh@926 -- # '[' -z 3000440 ']' 00:13:20.999 06:53:27 -- common/autotest_common.sh@930 -- # kill -0 3000440 00:13:20.999 06:53:27 -- common/autotest_common.sh@931 -- # uname 00:13:20.999 06:53:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:20.999 06:53:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3000440 00:13:20.999 06:53:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:20.999 06:53:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:20.999 06:53:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3000440' 00:13:20.999 killing process with pid 3000440 00:13:20.999 06:53:27 -- 
common/autotest_common.sh@945 -- # kill 3000440 00:13:20.999 06:53:27 -- common/autotest_common.sh@950 -- # wait 3000440 00:13:20.999 06:53:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:20.999 06:53:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:20.999 06:53:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:20.999 06:53:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.999 06:53:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:20.999 06:53:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.999 06:53:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.999 06:53:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.534 06:53:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:23.534 00:13:23.534 real 0m8.661s 00:13:23.534 user 0m16.750s 00:13:23.534 sys 0m2.193s 00:13:23.534 06:53:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.534 06:53:30 -- common/autotest_common.sh@10 -- # set +x 00:13:23.534 ************************************ 00:13:23.534 END TEST nvmf_nvme_cli 00:13:23.534 ************************************ 00:13:23.534 06:53:30 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:23.534 06:53:30 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:23.534 06:53:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:23.534 06:53:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:23.534 06:53:30 -- common/autotest_common.sh@10 -- # set +x 00:13:23.534 ************************************ 00:13:23.534 START TEST nvmf_host_management 00:13:23.534 ************************************ 00:13:23.534 06:53:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:23.534 * Looking for test storage... 
00:13:23.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.534 06:53:30 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.534 06:53:30 -- nvmf/common.sh@7 -- # uname -s 00:13:23.534 06:53:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.534 06:53:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.534 06:53:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.534 06:53:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.534 06:53:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.534 06:53:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.534 06:53:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.534 06:53:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.534 06:53:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.534 06:53:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.534 06:53:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.534 06:53:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.534 06:53:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.534 06:53:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.534 06:53:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.534 06:53:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.534 06:53:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.534 06:53:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.534 06:53:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.534 06:53:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.535 06:53:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.535 06:53:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.535 06:53:30 -- paths/export.sh@5 -- # export PATH 00:13:23.535 06:53:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.535 06:53:30 -- nvmf/common.sh@46 -- # : 0 00:13:23.535 06:53:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:23.535 06:53:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:23.535 06:53:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:23.535 06:53:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.535 06:53:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.535 06:53:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:23.535 06:53:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:23.535 06:53:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:23.535 06:53:30 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:23.535 06:53:30 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:23.535 06:53:30 -- target/host_management.sh@104 -- # nvmftestinit 00:13:23.535 06:53:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:23.535 06:53:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.535 06:53:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:23.535 06:53:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:23.535 06:53:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:23.535 06:53:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.535 06:53:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.535 06:53:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
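`nvmftestinit` here repeats the namespace plumbing already traced once in the nvme_cli run above: one port of the e810 pair (`cvl_0_0`) is moved into a namespace as the target side while the other (`cvl_0_1`) stays in the root namespace as the initiator. A hedged reconstruction of that `nvmf_tcp_init` sequence, shown as a root-only transcript rather than a tested script (interface names and addresses are taken from the trace):

```shell
# Reconstructed from the nvmf_tcp_init trace. Requires root and the two
# physical cvl_0_* ports, so this is a transcript, not a portable script.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic in on port 4420, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Isolating the target port in its own namespace is what lets a single host act as both NVMe/TCP target and initiator over real hardware, which is exactly the ping exchange the log records next.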
00:13:23.535 06:53:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:23.535 06:53:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:23.535 06:53:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:23.535 06:53:30 -- common/autotest_common.sh@10 -- # set +x 00:13:25.437 06:53:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:25.437 06:53:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:25.437 06:53:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:25.437 06:53:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:25.437 06:53:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:25.437 06:53:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:25.437 06:53:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:25.437 06:53:32 -- nvmf/common.sh@294 -- # net_devs=() 00:13:25.437 06:53:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:25.437 06:53:32 -- nvmf/common.sh@295 -- # e810=() 00:13:25.437 06:53:32 -- nvmf/common.sh@295 -- # local -ga e810 00:13:25.437 06:53:32 -- nvmf/common.sh@296 -- # x722=() 00:13:25.437 06:53:32 -- nvmf/common.sh@296 -- # local -ga x722 00:13:25.437 06:53:32 -- nvmf/common.sh@297 -- # mlx=() 00:13:25.437 06:53:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:25.437 06:53:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:13:25.437 06:53:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.437 06:53:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:25.437 06:53:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:25.437 06:53:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:25.437 06:53:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:25.437 06:53:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:25.437 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:25.437 06:53:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:25.437 06:53:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:25.437 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:25.437 06:53:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:25.437 06:53:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:25.437 
06:53:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:25.437 06:53:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.437 06:53:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:25.437 06:53:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.437 06:53:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:25.437 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:25.437 06:53:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.437 06:53:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:25.437 06:53:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.437 06:53:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:25.437 06:53:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.437 06:53:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:25.437 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:25.437 06:53:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.437 06:53:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:25.437 06:53:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:25.437 06:53:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:25.437 06:53:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.437 06:53:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.437 06:53:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.437 06:53:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:25.437 06:53:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.437 06:53:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.437 06:53:32 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:25.437 06:53:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.437 06:53:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.437 06:53:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:25.437 06:53:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:25.437 06:53:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.437 06:53:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.437 06:53:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.437 06:53:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.437 06:53:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:25.437 06:53:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.437 06:53:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.437 06:53:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.437 06:53:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:25.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:13:25.437 00:13:25.437 --- 10.0.0.2 ping statistics --- 00:13:25.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.437 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:13:25.437 06:53:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:25.437 00:13:25.437 --- 10.0.0.1 ping statistics --- 00:13:25.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.437 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:25.437 06:53:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.437 06:53:32 -- nvmf/common.sh@410 -- # return 0 00:13:25.437 06:53:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:25.437 06:53:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.437 06:53:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:25.437 06:53:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:25.438 06:53:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.438 06:53:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:25.438 06:53:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:25.438 06:53:32 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:25.438 06:53:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:25.438 06:53:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.438 06:53:32 -- common/autotest_common.sh@10 -- # set +x 00:13:25.438 ************************************ 00:13:25.438 START TEST nvmf_host_management 00:13:25.438 ************************************ 00:13:25.438 06:53:32 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:25.438 06:53:32 -- target/host_management.sh@69 -- # starttarget 00:13:25.438 06:53:32 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:25.438 06:53:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:25.438 06:53:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:25.438 06:53:32 -- common/autotest_common.sh@10 -- # set +x 00:13:25.438 06:53:32 -- nvmf/common.sh@469 -- # nvmfpid=3002858 00:13:25.438 06:53:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:25.438 06:53:32 -- nvmf/common.sh@470 -- # waitforlisten 3002858 00:13:25.438 06:53:32 -- common/autotest_common.sh@819 -- # '[' -z 3002858 ']' 00:13:25.438 06:53:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.438 06:53:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:25.438 06:53:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.438 06:53:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:25.438 06:53:32 -- common/autotest_common.sh@10 -- # set +x 00:13:25.438 [2024-05-12 06:53:32.289950] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:25.438 [2024-05-12 06:53:32.290044] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.438 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.438 [2024-05-12 06:53:32.362059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.438 [2024-05-12 06:53:32.481620] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:25.438 [2024-05-12 06:53:32.481791] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.438 [2024-05-12 06:53:32.481811] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.438 [2024-05-12 06:53:32.481827] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:25.438 [2024-05-12 06:53:32.481914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.438 [2024-05-12 06:53:32.481968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.438 [2024-05-12 06:53:32.482018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:25.438 [2024-05-12 06:53:32.482021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.371 06:53:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:26.371 06:53:33 -- common/autotest_common.sh@852 -- # return 0 00:13:26.371 06:53:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:26.372 06:53:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:26.372 06:53:33 -- common/autotest_common.sh@10 -- # set +x 00:13:26.372 06:53:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.372 06:53:33 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.372 06:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.372 06:53:33 -- common/autotest_common.sh@10 -- # set +x 00:13:26.372 [2024-05-12 06:53:33.300247] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.372 06:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.372 06:53:33 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:26.372 06:53:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:26.372 06:53:33 -- common/autotest_common.sh@10 -- # set +x 00:13:26.372 06:53:33 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:26.372 06:53:33 -- target/host_management.sh@23 -- # cat 00:13:26.372 06:53:33 -- target/host_management.sh@30 -- # rpc_cmd 00:13:26.372 06:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.372 06:53:33 -- common/autotest_common.sh@10 -- # set +x 00:13:26.372 
Malloc0 00:13:26.372 [2024-05-12 06:53:33.359495] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.372 06:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.372 06:53:33 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:26.372 06:53:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:26.372 06:53:33 -- common/autotest_common.sh@10 -- # set +x 00:13:26.372 06:53:33 -- target/host_management.sh@73 -- # perfpid=3003037 00:13:26.372 06:53:33 -- target/host_management.sh@74 -- # waitforlisten 3003037 /var/tmp/bdevperf.sock 00:13:26.372 06:53:33 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:26.372 06:53:33 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:26.372 06:53:33 -- common/autotest_common.sh@819 -- # '[' -z 3003037 ']' 00:13:26.372 06:53:33 -- nvmf/common.sh@520 -- # config=() 00:13:26.372 06:53:33 -- nvmf/common.sh@520 -- # local subsystem config 00:13:26.372 06:53:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.372 06:53:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:26.372 06:53:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:26.372 06:53:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:26.372 { 00:13:26.372 "params": { 00:13:26.372 "name": "Nvme$subsystem", 00:13:26.372 "trtype": "$TEST_TRANSPORT", 00:13:26.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:26.372 "adrfam": "ipv4", 00:13:26.372 "trsvcid": "$NVMF_PORT", 00:13:26.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:26.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:26.372 "hdgst": ${hdgst:-false}, 00:13:26.372 "ddgst": ${ddgst:-false} 00:13:26.372 }, 00:13:26.372 "method": "bdev_nvme_attach_controller" 00:13:26.372 } 00:13:26.372 EOF 
00:13:26.372 )") 00:13:26.372 06:53:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.372 06:53:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:26.372 06:53:33 -- common/autotest_common.sh@10 -- # set +x 00:13:26.372 06:53:33 -- nvmf/common.sh@542 -- # cat 00:13:26.372 06:53:33 -- nvmf/common.sh@544 -- # jq . 00:13:26.372 06:53:33 -- nvmf/common.sh@545 -- # IFS=, 00:13:26.372 06:53:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:26.372 "params": { 00:13:26.372 "name": "Nvme0", 00:13:26.372 "trtype": "tcp", 00:13:26.372 "traddr": "10.0.0.2", 00:13:26.372 "adrfam": "ipv4", 00:13:26.372 "trsvcid": "4420", 00:13:26.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:26.372 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:26.372 "hdgst": false, 00:13:26.372 "ddgst": false 00:13:26.372 }, 00:13:26.372 "method": "bdev_nvme_attach_controller" 00:13:26.372 }' 00:13:26.372 [2024-05-12 06:53:33.427136] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:26.372 [2024-05-12 06:53:33.427217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003037 ] 00:13:26.372 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.372 [2024-05-12 06:53:33.490302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.629 [2024-05-12 06:53:33.598907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.886 Running I/O for 10 seconds... 
00:13:27.452 06:53:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:27.452 06:53:34 -- common/autotest_common.sh@852 -- # return 0 00:13:27.452 06:53:34 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:27.452 06:53:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.452 06:53:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.452 06:53:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.452 06:53:34 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:27.452 06:53:34 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:27.452 06:53:34 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:27.452 06:53:34 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:27.452 06:53:34 -- target/host_management.sh@52 -- # local ret=1 00:13:27.452 06:53:34 -- target/host_management.sh@53 -- # local i 00:13:27.452 06:53:34 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:27.452 06:53:34 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:27.452 06:53:34 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:27.452 06:53:34 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:27.452 06:53:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.452 06:53:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.452 06:53:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.452 06:53:34 -- target/host_management.sh@55 -- # read_io_count=1219 00:13:27.452 06:53:34 -- target/host_management.sh@58 -- # '[' 1219 -ge 100 ']' 00:13:27.452 06:53:34 -- target/host_management.sh@59 -- # ret=0 00:13:27.452 06:53:34 -- target/host_management.sh@60 -- # break 00:13:27.452 06:53:34 -- target/host_management.sh@64 -- # return 0 00:13:27.452 06:53:34 -- 
target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:27.452 06:53:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.452 06:53:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.452 [2024-05-12 06:53:34.467376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a480 is same with the state(5) to be set 00:13:27.452 [previous tcp.c:1574 message repeated 34 more times, 06:53:34.467450 through 06:53:34.468215] 00:13:27.453 [2024-05-12
06:53:34.471168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 06:53:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.453 [2024-05-12 06:53:34.471621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471734] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 06:53:34 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:27.453 [2024-05-12 06:53:34.471781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:27.453 [2024-05-12 06:53:34.471900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 06:53:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.453 [2024-05-12 06:53:34.471914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.471974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.471989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 06:53:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.453 [2024-05-12 06:53:34.472019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472061] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.453 [2024-05-12 06:53:34.472326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.453 [2024-05-12 06:53:34.472340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 
[2024-05-12 06:53:34.472398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.472983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.472999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.473013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.473047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.473076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473091] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.473106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.473135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.454 [2024-05-12 06:53:34.473164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473259] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ff4b50 was disconnected and freed. reset controller. 
00:13:27.454 [2024-05-12 06:53:34.473329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.454 [2024-05-12 06:53:34.473432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.454 [2024-05-12 06:53:34.473468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.454 [2024-05-12 06:53:34.473495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.454 [2024-05-12 06:53:34.473523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.454 [2024-05-12 06:53:34.473536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff7400 is same with the state(5) to be set 00:13:27.454 [2024-05-12 06:53:34.474657] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:27.454 task offset: 36096 on job bdev=Nvme0n1 fails 00:13:27.454 00:13:27.454 Latency(us) 00:13:27.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.454 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:27.454 Job: Nvme0n1 ended 
in about 0.65 seconds with error 00:13:27.454 Verification LBA range: start 0x0 length 0x400 00:13:27.454 Nvme0n1 : 0.65 1997.20 124.83 99.16 0.00 30196.90 2730.67 31457.28 00:13:27.454 =================================================================================================================== 00:13:27.454 Total : 1997.20 124.83 99.16 0.00 30196.90 2730.67 31457.28 00:13:27.454 [2024-05-12 06:53:34.476734] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:27.454 [2024-05-12 06:53:34.476764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff7400 (9): Bad file descriptor 00:13:27.454 06:53:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.454 06:53:34 -- target/host_management.sh@87 -- # sleep 1 00:13:27.454 [2024-05-12 06:53:34.487996] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:28.388 06:53:35 -- target/host_management.sh@91 -- # kill -9 3003037 00:13:28.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3003037) - No such process 00:13:28.388 06:53:35 -- target/host_management.sh@91 -- # true 00:13:28.388 06:53:35 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:28.388 06:53:35 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:28.388 06:53:35 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:28.388 06:53:35 -- nvmf/common.sh@520 -- # config=() 00:13:28.388 06:53:35 -- nvmf/common.sh@520 -- # local subsystem config 00:13:28.388 06:53:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:28.388 06:53:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:28.388 { 00:13:28.388 "params": { 00:13:28.388 
"name": "Nvme$subsystem", 00:13:28.388 "trtype": "$TEST_TRANSPORT", 00:13:28.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:28.388 "adrfam": "ipv4", 00:13:28.388 "trsvcid": "$NVMF_PORT", 00:13:28.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:28.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:28.388 "hdgst": ${hdgst:-false}, 00:13:28.388 "ddgst": ${ddgst:-false} 00:13:28.388 }, 00:13:28.388 "method": "bdev_nvme_attach_controller" 00:13:28.388 } 00:13:28.388 EOF 00:13:28.388 )") 00:13:28.388 06:53:35 -- nvmf/common.sh@542 -- # cat 00:13:28.388 06:53:35 -- nvmf/common.sh@544 -- # jq . 00:13:28.388 06:53:35 -- nvmf/common.sh@545 -- # IFS=, 00:13:28.388 06:53:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:28.388 "params": { 00:13:28.388 "name": "Nvme0", 00:13:28.388 "trtype": "tcp", 00:13:28.388 "traddr": "10.0.0.2", 00:13:28.388 "adrfam": "ipv4", 00:13:28.388 "trsvcid": "4420", 00:13:28.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:28.388 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:28.388 "hdgst": false, 00:13:28.388 "ddgst": false 00:13:28.388 }, 00:13:28.388 "method": "bdev_nvme_attach_controller" 00:13:28.388 }' 00:13:28.646 [2024-05-12 06:53:35.524350] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:28.646 [2024-05-12 06:53:35.524431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003315 ] 00:13:28.646 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.646 [2024-05-12 06:53:35.586161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.646 [2024-05-12 06:53:35.695815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.213 Running I/O for 1 seconds... 
00:13:30.147 00:13:30.147 Latency(us) 00:13:30.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.147 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:30.147 Verification LBA range: start 0x0 length 0x400 00:13:30.147 Nvme0n1 : 1.01 2177.18 136.07 0.00 0.00 28983.67 2330.17 37282.70 00:13:30.147 =================================================================================================================== 00:13:30.147 Total : 2177.18 136.07 0.00 0.00 28983.67 2330.17 37282.70 00:13:30.406 06:53:37 -- target/host_management.sh@101 -- # stoptarget 00:13:30.406 06:53:37 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:30.406 06:53:37 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:30.406 06:53:37 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:30.406 06:53:37 -- target/host_management.sh@40 -- # nvmftestfini 00:13:30.406 06:53:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:30.406 06:53:37 -- nvmf/common.sh@116 -- # sync 00:13:30.406 06:53:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:30.406 06:53:37 -- nvmf/common.sh@119 -- # set +e 00:13:30.406 06:53:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:30.406 06:53:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:30.406 rmmod nvme_tcp 00:13:30.406 rmmod nvme_fabrics 00:13:30.406 rmmod nvme_keyring 00:13:30.406 06:53:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:30.406 06:53:37 -- nvmf/common.sh@123 -- # set -e 00:13:30.406 06:53:37 -- nvmf/common.sh@124 -- # return 0 00:13:30.406 06:53:37 -- nvmf/common.sh@477 -- # '[' -n 3002858 ']' 00:13:30.406 06:53:37 -- nvmf/common.sh@478 -- # killprocess 3002858 00:13:30.406 06:53:37 -- common/autotest_common.sh@926 -- # '[' -z 3002858 ']' 00:13:30.406 06:53:37 -- 
common/autotest_common.sh@930 -- # kill -0 3002858 00:13:30.406 06:53:37 -- common/autotest_common.sh@931 -- # uname 00:13:30.406 06:53:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:30.406 06:53:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3002858 00:13:30.406 06:53:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:30.406 06:53:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:30.406 06:53:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3002858' 00:13:30.406 killing process with pid 3002858 00:13:30.406 06:53:37 -- common/autotest_common.sh@945 -- # kill 3002858 00:13:30.406 06:53:37 -- common/autotest_common.sh@950 -- # wait 3002858 00:13:30.696 [2024-05-12 06:53:37.648800] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:30.696 06:53:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:30.696 06:53:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:30.696 06:53:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:30.696 06:53:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.696 06:53:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:30.696 06:53:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.696 06:53:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.696 06:53:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.603 06:53:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:32.603 00:13:32.603 real 0m7.471s 00:13:32.603 user 0m23.446s 00:13:32.603 sys 0m1.390s 00:13:32.603 06:53:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.603 06:53:39 -- common/autotest_common.sh@10 -- # set +x 00:13:32.603 ************************************ 00:13:32.603 END TEST nvmf_host_management 00:13:32.603 ************************************ 00:13:32.864 06:53:39 -- 
target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:32.864 00:13:32.864 real 0m9.638s 00:13:32.864 user 0m24.222s 00:13:32.864 sys 0m2.810s 00:13:32.864 06:53:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.864 06:53:39 -- common/autotest_common.sh@10 -- # set +x 00:13:32.864 ************************************ 00:13:32.864 END TEST nvmf_host_management 00:13:32.864 ************************************ 00:13:32.864 06:53:39 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:32.864 06:53:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:32.865 06:53:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:32.865 06:53:39 -- common/autotest_common.sh@10 -- # set +x 00:13:32.865 ************************************ 00:13:32.865 START TEST nvmf_lvol 00:13:32.865 ************************************ 00:13:32.865 06:53:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:32.865 * Looking for test storage... 
00:13:32.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.865 06:53:39 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.865 06:53:39 -- nvmf/common.sh@7 -- # uname -s 00:13:32.865 06:53:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.865 06:53:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.865 06:53:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.865 06:53:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.865 06:53:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.865 06:53:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.865 06:53:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.865 06:53:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.865 06:53:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.865 06:53:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.865 06:53:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.865 06:53:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.865 06:53:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.865 06:53:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.865 06:53:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.865 06:53:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.865 06:53:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.865 06:53:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.865 06:53:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.865 06:53:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.865 06:53:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.865 06:53:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.865 06:53:39 -- paths/export.sh@5 -- # export PATH 00:13:32.865 06:53:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.865 06:53:39 -- nvmf/common.sh@46 -- # : 0 00:13:32.865 06:53:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:32.865 06:53:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:32.865 06:53:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:32.865 06:53:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.865 06:53:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.865 06:53:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:32.865 06:53:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:32.865 06:53:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:32.865 06:53:39 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:32.865 06:53:39 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:32.865 06:53:39 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:32.865 06:53:39 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:32.865 06:53:39 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.865 06:53:39 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:32.865 06:53:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:32.865 06:53:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.865 06:53:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:32.865 06:53:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:32.865 06:53:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 
00:13:32.865 06:53:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.865 06:53:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.865 06:53:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.865 06:53:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:32.865 06:53:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:32.865 06:53:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:32.865 06:53:39 -- common/autotest_common.sh@10 -- # set +x 00:13:34.766 06:53:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:34.766 06:53:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:34.766 06:53:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:34.766 06:53:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:34.766 06:53:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:34.766 06:53:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:34.766 06:53:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:34.766 06:53:41 -- nvmf/common.sh@294 -- # net_devs=() 00:13:34.767 06:53:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:34.767 06:53:41 -- nvmf/common.sh@295 -- # e810=() 00:13:34.767 06:53:41 -- nvmf/common.sh@295 -- # local -ga e810 00:13:34.767 06:53:41 -- nvmf/common.sh@296 -- # x722=() 00:13:34.767 06:53:41 -- nvmf/common.sh@296 -- # local -ga x722 00:13:34.767 06:53:41 -- nvmf/common.sh@297 -- # mlx=() 00:13:34.767 06:53:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:34.767 06:53:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.767 06:53:41 -- 
nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.767 06:53:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:34.767 06:53:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:34.767 06:53:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:34.767 06:53:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.767 06:53:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:34.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:34.767 06:53:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.767 06:53:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:34.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:34.767 06:53:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.767 06:53:41 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:34.767 06:53:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.767 06:53:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.767 06:53:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.767 06:53:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.767 06:53:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:34.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:34.767 06:53:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.767 06:53:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.767 06:53:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.767 06:53:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.767 06:53:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.767 06:53:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:34.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:34.767 06:53:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.767 06:53:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:34.767 06:53:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:34.767 06:53:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:34.767 06:53:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.767 06:53:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.767 06:53:41 -- nvmf/common.sh@230 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.767 06:53:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:34.767 06:53:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.767 06:53:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.767 06:53:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:34.767 06:53:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.767 06:53:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.767 06:53:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:34.767 06:53:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:34.767 06:53:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.767 06:53:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.767 06:53:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.767 06:53:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.767 06:53:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:34.767 06:53:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.767 06:53:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.767 06:53:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.767 06:53:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:34.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:13:34.767 00:13:34.767 --- 10.0.0.2 ping statistics --- 00:13:34.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.767 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:13:34.767 06:53:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:13:34.767 00:13:34.767 --- 10.0.0.1 ping statistics --- 00:13:34.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.767 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:13:34.767 06:53:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.767 06:53:41 -- nvmf/common.sh@410 -- # return 0 00:13:34.767 06:53:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:34.767 06:53:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.767 06:53:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:34.767 06:53:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.767 06:53:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:34.767 06:53:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:34.767 06:53:41 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:34.767 06:53:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:34.767 06:53:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:34.767 06:53:41 -- common/autotest_common.sh@10 -- # set +x 00:13:34.767 06:53:41 -- nvmf/common.sh@469 -- # nvmfpid=3005551 00:13:34.767 06:53:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:34.767 06:53:41 -- nvmf/common.sh@470 -- # waitforlisten 3005551 00:13:34.767 06:53:41 -- common/autotest_common.sh@819 -- # '[' -z 3005551 ']' 00:13:34.767 06:53:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.767 06:53:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:34.767 06:53:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:34.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.767 06:53:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:34.767 06:53:41 -- common/autotest_common.sh@10 -- # set +x 00:13:35.026 [2024-05-12 06:53:41.930234] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:35.026 [2024-05-12 06:53:41.930317] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.026 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.026 [2024-05-12 06:53:41.993687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.026 [2024-05-12 06:53:42.101680] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.026 [2024-05-12 06:53:42.101838] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.026 [2024-05-12 06:53:42.101856] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.026 [2024-05-12 06:53:42.101869] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:35.026 [2024-05-12 06:53:42.101940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.026 [2024-05-12 06:53:42.102001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.026 [2024-05-12 06:53:42.102004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.957 06:53:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:35.957 06:53:42 -- common/autotest_common.sh@852 -- # return 0 00:13:35.957 06:53:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:35.957 06:53:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:35.957 06:53:42 -- common/autotest_common.sh@10 -- # set +x 00:13:35.957 06:53:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.957 06:53:42 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:36.214 [2024-05-12 06:53:43.197111] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.214 06:53:43 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:36.472 06:53:43 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:36.472 06:53:43 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:36.730 06:53:43 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:36.730 06:53:43 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:36.988 06:53:43 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:37.246 06:53:44 -- target/nvmf_lvol.sh@29 -- # lvs=007385ec-52b8-4145-b217-7965157417e5 00:13:37.246 06:53:44 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 007385ec-52b8-4145-b217-7965157417e5 lvol 20 00:13:37.503 06:53:44 -- target/nvmf_lvol.sh@32 -- # lvol=b119223f-bb6c-4bf9-99d3-58d50fc82f50 00:13:37.503 06:53:44 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:37.760 06:53:44 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b119223f-bb6c-4bf9-99d3-58d50fc82f50 00:13:38.017 06:53:44 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:38.274 [2024-05-12 06:53:45.184828] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.274 06:53:45 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:38.532 06:53:45 -- target/nvmf_lvol.sh@42 -- # perf_pid=3005997 00:13:38.532 06:53:45 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:38.532 06:53:45 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:38.532 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.464 06:53:46 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b119223f-bb6c-4bf9-99d3-58d50fc82f50 MY_SNAPSHOT 00:13:39.721 06:53:46 -- target/nvmf_lvol.sh@47 -- # snapshot=5b1e1fa0-cb06-4e09-a335-dda4e53f6350 00:13:39.721 06:53:46 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
b119223f-bb6c-4bf9-99d3-58d50fc82f50 30 00:13:39.979 06:53:46 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5b1e1fa0-cb06-4e09-a335-dda4e53f6350 MY_CLONE 00:13:40.236 06:53:47 -- target/nvmf_lvol.sh@49 -- # clone=ee415e59-8363-42ba-88dd-54f5880f2748 00:13:40.236 06:53:47 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ee415e59-8363-42ba-88dd-54f5880f2748 00:13:40.802 06:53:47 -- target/nvmf_lvol.sh@53 -- # wait 3005997 00:13:48.910 Initializing NVMe Controllers 00:13:48.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:48.910 Controller IO queue size 128, less than required. 00:13:48.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:48.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:48.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:48.910 Initialization complete. Launching workers. 
00:13:48.910 ======================================================== 00:13:48.910 Latency(us) 00:13:48.910 Device Information : IOPS MiB/s Average min max 00:13:48.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11643.15 45.48 10998.24 462.28 94528.97 00:13:48.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8737.93 34.13 14654.15 1628.87 68367.00 00:13:48.910 ======================================================== 00:13:48.910 Total : 20381.08 79.61 12565.63 462.28 94528.97 00:13:48.910 00:13:48.910 06:53:55 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:49.168 06:53:56 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b119223f-bb6c-4bf9-99d3-58d50fc82f50 00:13:49.426 06:53:56 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 007385ec-52b8-4145-b217-7965157417e5 00:13:49.684 06:53:56 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:49.684 06:53:56 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:49.684 06:53:56 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:49.684 06:53:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:49.684 06:53:56 -- nvmf/common.sh@116 -- # sync 00:13:49.684 06:53:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:49.684 06:53:56 -- nvmf/common.sh@119 -- # set +e 00:13:49.684 06:53:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:49.684 06:53:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:49.684 rmmod nvme_tcp 00:13:49.684 rmmod nvme_fabrics 00:13:49.684 rmmod nvme_keyring 00:13:49.684 06:53:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:49.684 06:53:56 -- nvmf/common.sh@123 -- # set -e 00:13:49.684 06:53:56 -- nvmf/common.sh@124 -- # return 0 00:13:49.684 06:53:56 -- nvmf/common.sh@477 -- # '[' -n 
3005551 ']' 00:13:49.684 06:53:56 -- nvmf/common.sh@478 -- # killprocess 3005551 00:13:49.684 06:53:56 -- common/autotest_common.sh@926 -- # '[' -z 3005551 ']' 00:13:49.684 06:53:56 -- common/autotest_common.sh@930 -- # kill -0 3005551 00:13:49.684 06:53:56 -- common/autotest_common.sh@931 -- # uname 00:13:49.684 06:53:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:49.684 06:53:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3005551 00:13:49.684 06:53:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:49.684 06:53:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:49.684 06:53:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3005551' 00:13:49.684 killing process with pid 3005551 00:13:49.684 06:53:56 -- common/autotest_common.sh@945 -- # kill 3005551 00:13:49.684 06:53:56 -- common/autotest_common.sh@950 -- # wait 3005551 00:13:49.948 06:53:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:49.948 06:53:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:49.948 06:53:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:49.948 06:53:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.948 06:53:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:49.948 06:53:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.948 06:53:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.948 06:53:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.519 06:53:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:52.519 00:13:52.519 real 0m19.298s 00:13:52.519 user 1m0.837s 00:13:52.519 sys 0m7.758s 00:13:52.519 06:53:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.519 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:52.519 ************************************ 00:13:52.519 END TEST nvmf_lvol 00:13:52.519 ************************************ 
00:13:52.519 06:53:59 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:52.519 06:53:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:52.519 06:53:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.519 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:52.519 ************************************ 00:13:52.519 START TEST nvmf_lvs_grow 00:13:52.519 ************************************ 00:13:52.519 06:53:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:52.519 * Looking for test storage... 00:13:52.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.519 06:53:59 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.519 06:53:59 -- nvmf/common.sh@7 -- # uname -s 00:13:52.519 06:53:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.519 06:53:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.519 06:53:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.519 06:53:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.519 06:53:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.519 06:53:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.519 06:53:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.519 06:53:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.519 06:53:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.519 06:53:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.519 06:53:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:52.519 06:53:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:52.519 06:53:59 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.519 06:53:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.519 06:53:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.519 06:53:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.519 06:53:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.519 06:53:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.519 06:53:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.519 06:53:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.519 06:53:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.519 06:53:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.519 06:53:59 -- paths/export.sh@5 -- # export PATH 00:13:52.519 06:53:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.519 06:53:59 -- nvmf/common.sh@46 -- # : 0 00:13:52.519 06:53:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:52.519 06:53:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:52.519 06:53:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:52.519 06:53:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.519 06:53:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.519 06:53:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:52.519 06:53:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:52.519 06:53:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:52.519 06:53:59 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:52.519 06:53:59 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:52.519 06:53:59 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:52.519 06:53:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:52.519 06:53:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.519 06:53:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:52.519 06:53:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:52.519 06:53:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:52.519 06:53:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.519 06:53:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.519 06:53:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.519 06:53:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:52.519 06:53:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:52.519 06:53:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:52.519 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:53.895 06:54:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:53.895 06:54:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:53.895 06:54:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:53.895 06:54:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:53.895 06:54:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:53.895 06:54:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:53.895 06:54:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:53.895 06:54:00 -- nvmf/common.sh@294 -- # net_devs=() 00:13:53.895 06:54:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:53.895 06:54:00 -- nvmf/common.sh@295 -- # e810=() 00:13:53.895 06:54:00 -- nvmf/common.sh@295 -- # local -ga e810 00:13:53.895 06:54:00 -- nvmf/common.sh@296 -- # x722=() 00:13:53.895 06:54:00 -- nvmf/common.sh@296 -- # local -ga x722 00:13:53.895 06:54:00 -- nvmf/common.sh@297 -- # mlx=() 00:13:53.895 06:54:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:53.895 06:54:00 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.895 06:54:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:53.895 06:54:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:53.895 06:54:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:53.895 06:54:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:53.895 06:54:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:53.895 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:53.895 06:54:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:53.895 
06:54:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:53.895 06:54:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:53.895 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:53.895 06:54:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:53.895 06:54:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:53.895 06:54:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:53.895 06:54:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.895 06:54:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:53.896 06:54:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.896 06:54:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:53.896 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:53.896 06:54:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.896 06:54:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:53.896 06:54:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.896 06:54:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:53.896 06:54:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.896 06:54:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:53.896 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:53.896 06:54:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.896 06:54:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:53.896 06:54:00 -- 
nvmf/common.sh@402 -- # is_hw=yes 00:13:53.896 06:54:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:53.896 06:54:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:53.896 06:54:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:53.896 06:54:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.896 06:54:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.896 06:54:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.896 06:54:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:53.896 06:54:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.896 06:54:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.896 06:54:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:53.896 06:54:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.896 06:54:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.896 06:54:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:53.896 06:54:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:53.896 06:54:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.896 06:54:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.154 06:54:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.154 06:54:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.154 06:54:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:54.154 06:54:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.154 06:54:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.154 06:54:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.154 06:54:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:54.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:54.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:13:54.154 00:13:54.154 --- 10.0.0.2 ping statistics --- 00:13:54.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.154 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:13:54.154 06:54:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:13:54.154 00:13:54.154 --- 10.0.0.1 ping statistics --- 00:13:54.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.154 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:54.154 06:54:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.154 06:54:01 -- nvmf/common.sh@410 -- # return 0 00:13:54.154 06:54:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:54.154 06:54:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.154 06:54:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:54.154 06:54:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:54.154 06:54:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.154 06:54:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:54.154 06:54:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:54.154 06:54:01 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:13:54.154 06:54:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:54.154 06:54:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:54.154 06:54:01 -- common/autotest_common.sh@10 -- # set +x 00:13:54.154 06:54:01 -- nvmf/common.sh@469 -- # nvmfpid=3009374 00:13:54.154 06:54:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:54.154 06:54:01 -- nvmf/common.sh@470 -- # waitforlisten 3009374 00:13:54.154 06:54:01 -- 
common/autotest_common.sh@819 -- # '[' -z 3009374 ']' 00:13:54.154 06:54:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.154 06:54:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:54.154 06:54:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.154 06:54:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:54.154 06:54:01 -- common/autotest_common.sh@10 -- # set +x 00:13:54.154 [2024-05-12 06:54:01.168183] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:54.154 [2024-05-12 06:54:01.168258] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.154 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.154 [2024-05-12 06:54:01.232175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.413 [2024-05-12 06:54:01.341493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:54.413 [2024-05-12 06:54:01.341645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.413 [2024-05-12 06:54:01.341661] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.413 [2024-05-12 06:54:01.341688] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:54.413 [2024-05-12 06:54:01.341733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.345 06:54:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:55.345 06:54:02 -- common/autotest_common.sh@852 -- # return 0 00:13:55.345 06:54:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:55.345 06:54:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:55.345 06:54:02 -- common/autotest_common.sh@10 -- # set +x 00:13:55.345 06:54:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:55.345 [2024-05-12 06:54:02.357872] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:13:55.345 06:54:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:55.345 06:54:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:55.345 06:54:02 -- common/autotest_common.sh@10 -- # set +x 00:13:55.345 ************************************ 00:13:55.345 START TEST lvs_grow_clean 00:13:55.345 ************************************ 00:13:55.345 06:54:02 -- common/autotest_common.sh@1104 -- # lvs_grow 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.345 06:54:02 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:55.602 06:54:02 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:55.602 06:54:02 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:55.860 06:54:02 -- target/nvmf_lvs_grow.sh@28 -- # lvs=63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:13:55.860 06:54:02 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:13:55.860 06:54:02 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:56.118 06:54:03 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:56.118 06:54:03 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:56.118 06:54:03 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 lvol 150 00:13:56.375 06:54:03 -- target/nvmf_lvs_grow.sh@33 -- # lvol=03184b84-efc1-4276-a159-d66fe3cbaaf5 00:13:56.375 06:54:03 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:56.375 06:54:03 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:56.633 [2024-05-12 06:54:03.615899] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:56.633 [2024-05-12 06:54:03.615972] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:56.633 true 00:13:56.633 06:54:03 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:13:56.633 06:54:03 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:56.891 06:54:03 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:56.891 06:54:03 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:57.149 06:54:04 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 03184b84-efc1-4276-a159-d66fe3cbaaf5 00:13:57.407 06:54:04 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:57.665 [2024-05-12 06:54:04.611093] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.665 06:54:04 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:57.924 06:54:04 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3009875 00:13:57.924 06:54:04 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:57.924 06:54:04 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:57.924 06:54:04 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3009875 /var/tmp/bdevperf.sock 00:13:57.924 06:54:04 -- common/autotest_common.sh@819 -- # '[' -z 3009875 ']' 00:13:57.924 06:54:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.924 06:54:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:57.924 06:54:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.924 06:54:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:57.924 06:54:04 -- common/autotest_common.sh@10 -- # set +x 00:13:57.924 [2024-05-12 06:54:04.912135] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:57.924 [2024-05-12 06:54:04.912229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3009875 ] 00:13:57.924 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.924 [2024-05-12 06:54:04.974545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.182 [2024-05-12 06:54:05.090102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.746 06:54:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:58.746 06:54:05 -- common/autotest_common.sh@852 -- # return 0 00:13:58.747 06:54:05 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:59.311 Nvme0n1 00:13:59.311 06:54:06 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:13:59.568 [ 00:13:59.568 { 00:13:59.568 "name": "Nvme0n1", 00:13:59.568 "aliases": [ 00:13:59.568 "03184b84-efc1-4276-a159-d66fe3cbaaf5" 00:13:59.568 ], 00:13:59.568 "product_name": "NVMe disk", 00:13:59.568 "block_size": 4096, 00:13:59.568 "num_blocks": 38912, 00:13:59.568 "uuid": "03184b84-efc1-4276-a159-d66fe3cbaaf5", 00:13:59.568 "assigned_rate_limits": { 00:13:59.568 "rw_ios_per_sec": 0, 00:13:59.568 "rw_mbytes_per_sec": 0, 00:13:59.568 "r_mbytes_per_sec": 0, 00:13:59.568 "w_mbytes_per_sec": 0 00:13:59.568 }, 00:13:59.568 "claimed": false, 00:13:59.568 "zoned": false, 00:13:59.568 "supported_io_types": { 00:13:59.568 "read": true, 00:13:59.568 "write": true, 00:13:59.568 "unmap": true, 00:13:59.568 "write_zeroes": true, 00:13:59.568 "flush": true, 00:13:59.568 "reset": true, 00:13:59.568 "compare": true, 00:13:59.568 "compare_and_write": true, 00:13:59.568 "abort": true, 00:13:59.568 "nvme_admin": true, 00:13:59.568 "nvme_io": true 00:13:59.568 }, 00:13:59.568 "driver_specific": { 00:13:59.568 "nvme": [ 00:13:59.568 { 00:13:59.568 "trid": { 00:13:59.568 "trtype": "TCP", 00:13:59.568 "adrfam": "IPv4", 00:13:59.568 "traddr": "10.0.0.2", 00:13:59.568 "trsvcid": "4420", 00:13:59.568 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:59.568 }, 00:13:59.568 "ctrlr_data": { 00:13:59.568 "cntlid": 1, 00:13:59.568 "vendor_id": "0x8086", 00:13:59.568 "model_number": "SPDK bdev Controller", 00:13:59.568 "serial_number": "SPDK0", 00:13:59.568 "firmware_revision": "24.01.1", 00:13:59.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:59.568 "oacs": { 00:13:59.568 "security": 0, 00:13:59.568 "format": 0, 00:13:59.568 "firmware": 0, 00:13:59.568 "ns_manage": 0 00:13:59.568 }, 00:13:59.568 "multi_ctrlr": true, 00:13:59.568 "ana_reporting": false 00:13:59.569 }, 00:13:59.569 "vs": { 00:13:59.569 "nvme_version": "1.3" 00:13:59.569 }, 00:13:59.569 "ns_data": { 00:13:59.569 "id": 1, 00:13:59.569 "can_share": true 00:13:59.569 } 00:13:59.569 } 00:13:59.569 ], 00:13:59.569 
"mp_policy": "active_passive" 00:13:59.569 } 00:13:59.569 } 00:13:59.569 ] 00:13:59.569 06:54:06 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3010145 00:13:59.569 06:54:06 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:59.569 06:54:06 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:59.826 Running I/O for 10 seconds... 00:14:00.759 Latency(us) 00:14:00.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.759 Nvme0n1 : 1.00 14483.00 56.57 0.00 0.00 0.00 0.00 0.00 00:14:00.759 =================================================================================================================== 00:14:00.759 Total : 14483.00 56.57 0.00 0.00 0.00 0.00 0.00 00:14:00.759 00:14:01.694 06:54:08 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:14:01.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.694 Nvme0n1 : 2.00 14601.50 57.04 0.00 0.00 0.00 0.00 0.00 00:14:01.694 =================================================================================================================== 00:14:01.694 Total : 14601.50 57.04 0.00 0.00 0.00 0.00 0.00 00:14:01.694 00:14:01.953 true 00:14:01.953 06:54:08 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:14:01.953 06:54:08 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:02.211 06:54:09 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:02.211 06:54:09 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:02.211 06:54:09 -- target/nvmf_lvs_grow.sh@65 -- # wait 3010145 00:14:02.778 Job: 
Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.778 Nvme0n1 : 3.00 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:14:02.778 =================================================================================================================== 00:14:02.778 Total : 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:14:02.778 00:14:03.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.713 Nvme0n1 : 4.00 14738.00 57.57 0.00 0.00 0.00 0.00 0.00 00:14:03.713 =================================================================================================================== 00:14:03.713 Total : 14738.00 57.57 0.00 0.00 0.00 0.00 0.00 00:14:03.713 00:14:04.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.650 Nvme0n1 : 5.00 14797.20 57.80 0.00 0.00 0.00 0.00 0.00 00:14:04.650 =================================================================================================================== 00:14:04.650 Total : 14797.20 57.80 0.00 0.00 0.00 0.00 0.00 00:14:04.650 00:14:05.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.637 Nvme0n1 : 6.00 14848.33 58.00 0.00 0.00 0.00 0.00 0.00 00:14:05.637 =================================================================================================================== 00:14:05.637 Total : 14848.33 58.00 0.00 0.00 0.00 0.00 0.00 00:14:05.637 00:14:07.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.009 Nvme0n1 : 7.00 14884.86 58.14 0.00 0.00 0.00 0.00 0.00 00:14:07.009 =================================================================================================================== 00:14:07.009 Total : 14884.86 58.14 0.00 0.00 0.00 0.00 0.00 00:14:07.009 00:14:07.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.944 Nvme0n1 : 8.00 14921.00 58.29 0.00 0.00 0.00 0.00 0.00 00:14:07.944 
=================================================================================================================== 00:14:07.944 Total : 14921.00 58.29 0.00 0.00 0.00 0.00 0.00 00:14:07.944 00:14:08.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.879 Nvme0n1 : 9.00 14947.78 58.39 0.00 0.00 0.00 0.00 0.00 00:14:08.879 =================================================================================================================== 00:14:08.879 Total : 14947.78 58.39 0.00 0.00 0.00 0.00 0.00 00:14:08.879 00:14:09.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.813 Nvme0n1 : 10.00 15008.80 58.63 0.00 0.00 0.00 0.00 0.00 00:14:09.813 =================================================================================================================== 00:14:09.813 Total : 15008.80 58.63 0.00 0.00 0.00 0.00 0.00 00:14:09.813 00:14:09.813 00:14:09.813 Latency(us) 00:14:09.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.813 Nvme0n1 : 10.01 15009.63 58.63 0.00 0.00 8522.02 3422.44 13398.47 00:14:09.813 =================================================================================================================== 00:14:09.813 Total : 15009.63 58.63 0.00 0.00 8522.02 3422.44 13398.47 00:14:09.813 0 00:14:09.813 06:54:16 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3009875 00:14:09.813 06:54:16 -- common/autotest_common.sh@926 -- # '[' -z 3009875 ']' 00:14:09.813 06:54:16 -- common/autotest_common.sh@930 -- # kill -0 3009875 00:14:09.813 06:54:16 -- common/autotest_common.sh@931 -- # uname 00:14:09.813 06:54:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:09.813 06:54:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3009875 00:14:09.813 06:54:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:09.813 06:54:16 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:09.813 06:54:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3009875' 00:14:09.813 killing process with pid 3009875 00:14:09.813 06:54:16 -- common/autotest_common.sh@945 -- # kill 3009875 00:14:09.813 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.813 00:14:09.813 Latency(us) 00:14:09.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.813 =================================================================================================================== 00:14:09.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.813 06:54:16 -- common/autotest_common.sh@950 -- # wait 3009875 00:14:10.072 06:54:17 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:10.330 06:54:17 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:14:10.330 06:54:17 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:10.588 06:54:17 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:10.588 06:54:17 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:10.588 06:54:17 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:10.846 [2024-05-12 06:54:17.852486] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:10.846 06:54:17 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:14:10.846 06:54:17 -- common/autotest_common.sh@640 -- # local es=0 00:14:10.846 06:54:17 -- common/autotest_common.sh@642 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:14:10.846 06:54:17 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.846 06:54:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:10.846 06:54:17 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.846 06:54:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:10.846 06:54:17 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.846 06:54:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:10.846 06:54:17 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.846 06:54:17 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:10.846 06:54:17 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:14:11.104 request: 00:14:11.104 { 00:14:11.104 "uuid": "63145a6c-3f82-4dec-8691-0d9a3a8bcaf7", 00:14:11.104 "method": "bdev_lvol_get_lvstores", 00:14:11.104 "req_id": 1 00:14:11.104 } 00:14:11.104 Got JSON-RPC error response 00:14:11.104 response: 00:14:11.104 { 00:14:11.104 "code": -19, 00:14:11.104 "message": "No such device" 00:14:11.104 } 00:14:11.104 06:54:18 -- common/autotest_common.sh@643 -- # es=1 00:14:11.104 06:54:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:11.104 06:54:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:11.104 06:54:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:11.104 06:54:18 -- target/nvmf_lvs_grow.sh@85 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:11.363 aio_bdev 00:14:11.363 06:54:18 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 03184b84-efc1-4276-a159-d66fe3cbaaf5 00:14:11.363 06:54:18 -- common/autotest_common.sh@887 -- # local bdev_name=03184b84-efc1-4276-a159-d66fe3cbaaf5 00:14:11.363 06:54:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:11.363 06:54:18 -- common/autotest_common.sh@889 -- # local i 00:14:11.363 06:54:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:11.363 06:54:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:11.363 06:54:18 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:11.621 06:54:18 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 03184b84-efc1-4276-a159-d66fe3cbaaf5 -t 2000 00:14:11.880 [ 00:14:11.880 { 00:14:11.880 "name": "03184b84-efc1-4276-a159-d66fe3cbaaf5", 00:14:11.880 "aliases": [ 00:14:11.880 "lvs/lvol" 00:14:11.880 ], 00:14:11.880 "product_name": "Logical Volume", 00:14:11.880 "block_size": 4096, 00:14:11.880 "num_blocks": 38912, 00:14:11.880 "uuid": "03184b84-efc1-4276-a159-d66fe3cbaaf5", 00:14:11.880 "assigned_rate_limits": { 00:14:11.880 "rw_ios_per_sec": 0, 00:14:11.880 "rw_mbytes_per_sec": 0, 00:14:11.880 "r_mbytes_per_sec": 0, 00:14:11.880 "w_mbytes_per_sec": 0 00:14:11.880 }, 00:14:11.880 "claimed": false, 00:14:11.880 "zoned": false, 00:14:11.880 "supported_io_types": { 00:14:11.880 "read": true, 00:14:11.880 "write": true, 00:14:11.880 "unmap": true, 00:14:11.880 "write_zeroes": true, 00:14:11.880 "flush": false, 00:14:11.880 "reset": true, 00:14:11.880 "compare": false, 00:14:11.880 "compare_and_write": false, 00:14:11.880 "abort": false, 00:14:11.880 "nvme_admin": false, 00:14:11.880 
"nvme_io": false 00:14:11.880 }, 00:14:11.880 "driver_specific": { 00:14:11.880 "lvol": { 00:14:11.880 "lvol_store_uuid": "63145a6c-3f82-4dec-8691-0d9a3a8bcaf7", 00:14:11.880 "base_bdev": "aio_bdev", 00:14:11.880 "thin_provision": false, 00:14:11.880 "snapshot": false, 00:14:11.880 "clone": false, 00:14:11.880 "esnap_clone": false 00:14:11.880 } 00:14:11.880 } 00:14:11.880 } 00:14:11.880 ] 00:14:11.880 06:54:18 -- common/autotest_common.sh@895 -- # return 0 00:14:11.880 06:54:18 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:14:11.880 06:54:18 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:12.138 06:54:19 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:12.138 06:54:19 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:14:12.138 06:54:19 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:12.396 06:54:19 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:12.396 06:54:19 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 03184b84-efc1-4276-a159-d66fe3cbaaf5 00:14:12.654 06:54:19 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63145a6c-3f82-4dec-8691-0d9a3a8bcaf7 00:14:12.913 06:54:19 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.171 00:14:13.171 real 0m17.711s 00:14:13.171 user 0m17.512s 00:14:13.171 sys 0m1.772s 00:14:13.171 06:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:13.171 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.171 ************************************ 00:14:13.171 END TEST lvs_grow_clean 00:14:13.171 ************************************ 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:13.171 06:54:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:13.171 06:54:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:13.171 06:54:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.171 ************************************ 00:14:13.171 START TEST lvs_grow_dirty 00:14:13.171 ************************************ 00:14:13.171 06:54:20 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.171 06:54:20 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:13.430 06:54:20 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:13.430 06:54:20 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:13.689 06:54:20 -- target/nvmf_lvs_grow.sh@28 -- # lvs=58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:13.689 06:54:20 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:13.689 06:54:20 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:13.947 06:54:20 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:13.947 06:54:20 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:13.947 06:54:20 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 lvol 150 00:14:14.205 06:54:21 -- target/nvmf_lvs_grow.sh@33 -- # lvol=cf01597d-2db8-4608-84c9-15f69f0f292b 00:14:14.205 06:54:21 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:14.205 06:54:21 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:14.464 [2024-05-12 06:54:21.467179] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:14.464 [2024-05-12 06:54:21.467256] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:14.464 true 00:14:14.464 06:54:21 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:14.464 06:54:21 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:14.724 06:54:21 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:14.724 06:54:21 -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:14.985 06:54:21 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cf01597d-2db8-4608-84c9-15f69f0f292b 00:14:15.243 06:54:22 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:15.501 06:54:22 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:15.759 06:54:22 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3012618 00:14:15.759 06:54:22 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:15.759 06:54:22 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3012618 /var/tmp/bdevperf.sock 00:14:15.759 06:54:22 -- common/autotest_common.sh@819 -- # '[' -z 3012618 ']' 00:14:15.759 06:54:22 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:15.759 06:54:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:15.759 06:54:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:15.759 06:54:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:15.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:15.759 06:54:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:15.759 06:54:22 -- common/autotest_common.sh@10 -- # set +x 00:14:15.759 [2024-05-12 06:54:22.788715] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:15.759 [2024-05-12 06:54:22.788790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3012618 ] 00:14:15.759 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.759 [2024-05-12 06:54:22.850825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.017 [2024-05-12 06:54:22.964123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.953 06:54:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:16.953 06:54:23 -- common/autotest_common.sh@852 -- # return 0 00:14:16.953 06:54:23 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:16.953 Nvme0n1 00:14:16.953 06:54:24 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:17.211 [ 00:14:17.211 { 00:14:17.211 "name": "Nvme0n1", 00:14:17.211 "aliases": [ 00:14:17.211 "cf01597d-2db8-4608-84c9-15f69f0f292b" 00:14:17.211 ], 00:14:17.211 "product_name": "NVMe disk", 00:14:17.211 "block_size": 4096, 00:14:17.211 "num_blocks": 38912, 00:14:17.211 "uuid": "cf01597d-2db8-4608-84c9-15f69f0f292b", 00:14:17.211 "assigned_rate_limits": { 00:14:17.211 "rw_ios_per_sec": 0, 00:14:17.211 "rw_mbytes_per_sec": 0, 00:14:17.211 "r_mbytes_per_sec": 0, 00:14:17.211 "w_mbytes_per_sec": 0 00:14:17.211 }, 00:14:17.211 "claimed": false, 00:14:17.211 "zoned": false, 
00:14:17.211 "supported_io_types": { 00:14:17.211 "read": true, 00:14:17.211 "write": true, 00:14:17.211 "unmap": true, 00:14:17.211 "write_zeroes": true, 00:14:17.211 "flush": true, 00:14:17.211 "reset": true, 00:14:17.211 "compare": true, 00:14:17.211 "compare_and_write": true, 00:14:17.211 "abort": true, 00:14:17.211 "nvme_admin": true, 00:14:17.211 "nvme_io": true 00:14:17.211 }, 00:14:17.211 "driver_specific": { 00:14:17.211 "nvme": [ 00:14:17.211 { 00:14:17.211 "trid": { 00:14:17.211 "trtype": "TCP", 00:14:17.211 "adrfam": "IPv4", 00:14:17.211 "traddr": "10.0.0.2", 00:14:17.211 "trsvcid": "4420", 00:14:17.211 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:17.211 }, 00:14:17.211 "ctrlr_data": { 00:14:17.211 "cntlid": 1, 00:14:17.211 "vendor_id": "0x8086", 00:14:17.211 "model_number": "SPDK bdev Controller", 00:14:17.211 "serial_number": "SPDK0", 00:14:17.211 "firmware_revision": "24.01.1", 00:14:17.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:17.211 "oacs": { 00:14:17.211 "security": 0, 00:14:17.211 "format": 0, 00:14:17.211 "firmware": 0, 00:14:17.211 "ns_manage": 0 00:14:17.211 }, 00:14:17.211 "multi_ctrlr": true, 00:14:17.211 "ana_reporting": false 00:14:17.211 }, 00:14:17.211 "vs": { 00:14:17.211 "nvme_version": "1.3" 00:14:17.211 }, 00:14:17.211 "ns_data": { 00:14:17.211 "id": 1, 00:14:17.211 "can_share": true 00:14:17.211 } 00:14:17.211 } 00:14:17.211 ], 00:14:17.211 "mp_policy": "active_passive" 00:14:17.211 } 00:14:17.211 } 00:14:17.211 ] 00:14:17.211 06:54:24 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3012765 00:14:17.211 06:54:24 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:17.211 06:54:24 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:17.470 Running I/O for 10 seconds... 
00:14:18.409 Latency(us) 00:14:18.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.409 Nvme0n1 : 1.00 13547.00 52.92 0.00 0.00 0.00 0.00 0.00 00:14:18.409 =================================================================================================================== 00:14:18.409 Total : 13547.00 52.92 0.00 0.00 0.00 0.00 0.00 00:14:18.409 00:14:19.375 06:54:26 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:19.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.375 Nvme0n1 : 2.00 13709.50 53.55 0.00 0.00 0.00 0.00 0.00 00:14:19.375 =================================================================================================================== 00:14:19.375 Total : 13709.50 53.55 0.00 0.00 0.00 0.00 0.00 00:14:19.375 00:14:19.632 true 00:14:19.632 06:54:26 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:19.632 06:54:26 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:19.890 06:54:26 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:19.890 06:54:26 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:19.890 06:54:26 -- target/nvmf_lvs_grow.sh@65 -- # wait 3012765 00:14:20.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.459 Nvme0n1 : 3.00 13902.33 54.31 0.00 0.00 0.00 0.00 0.00 00:14:20.459 =================================================================================================================== 00:14:20.459 Total : 13902.33 54.31 0.00 0.00 0.00 0.00 0.00 00:14:20.459 00:14:21.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.392 
Nvme0n1 : 4.00 13954.75 54.51 0.00 0.00 0.00 0.00 0.00 00:14:21.392 =================================================================================================================== 00:14:21.392 Total : 13954.75 54.51 0.00 0.00 0.00 0.00 0.00 00:14:21.392 00:14:22.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.331 Nvme0n1 : 5.00 14002.20 54.70 0.00 0.00 0.00 0.00 0.00 00:14:22.331 =================================================================================================================== 00:14:22.331 Total : 14002.20 54.70 0.00 0.00 0.00 0.00 0.00 00:14:22.331 00:14:23.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.710 Nvme0n1 : 6.00 14041.83 54.85 0.00 0.00 0.00 0.00 0.00 00:14:23.710 =================================================================================================================== 00:14:23.710 Total : 14041.83 54.85 0.00 0.00 0.00 0.00 0.00 00:14:23.710 00:14:24.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.649 Nvme0n1 : 7.00 14080.43 55.00 0.00 0.00 0.00 0.00 0.00 00:14:24.649 =================================================================================================================== 00:14:24.649 Total : 14080.43 55.00 0.00 0.00 0.00 0.00 0.00 00:14:24.649 00:14:25.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.589 Nvme0n1 : 8.00 14109.38 55.11 0.00 0.00 0.00 0.00 0.00 00:14:25.589 =================================================================================================================== 00:14:25.589 Total : 14109.38 55.11 0.00 0.00 0.00 0.00 0.00 00:14:25.589 00:14:26.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.527 Nvme0n1 : 9.00 14169.22 55.35 0.00 0.00 0.00 0.00 0.00 00:14:26.527 =================================================================================================================== 
00:14:26.527 Total : 14169.22 55.35 0.00 0.00 0.00 0.00 0.00 00:14:26.527 00:14:27.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.461 Nvme0n1 : 10.00 14191.50 55.44 0.00 0.00 0.00 0.00 0.00 00:14:27.461 =================================================================================================================== 00:14:27.461 Total : 14191.50 55.44 0.00 0.00 0.00 0.00 0.00 00:14:27.461 00:14:27.461 00:14:27.462 Latency(us) 00:14:27.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.462 Nvme0n1 : 10.01 14191.99 55.44 0.00 0.00 9011.35 7184.69 18835.53 00:14:27.462 =================================================================================================================== 00:14:27.462 Total : 14191.99 55.44 0.00 0.00 9011.35 7184.69 18835.53 00:14:27.462 0 00:14:27.462 06:54:34 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3012618 00:14:27.462 06:54:34 -- common/autotest_common.sh@926 -- # '[' -z 3012618 ']' 00:14:27.462 06:54:34 -- common/autotest_common.sh@930 -- # kill -0 3012618 00:14:27.462 06:54:34 -- common/autotest_common.sh@931 -- # uname 00:14:27.462 06:54:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:27.462 06:54:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3012618 00:14:27.462 06:54:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:27.462 06:54:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:27.462 06:54:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3012618' 00:14:27.462 killing process with pid 3012618 00:14:27.462 06:54:34 -- common/autotest_common.sh@945 -- # kill 3012618 00:14:27.462 Received shutdown signal, test time was about 10.000000 seconds 00:14:27.462 00:14:27.462 Latency(us) 00:14:27.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:14:27.462 =================================================================================================================== 00:14:27.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:27.462 06:54:34 -- common/autotest_common.sh@950 -- # wait 3012618 00:14:27.720 06:54:34 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:27.978 06:54:35 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:27.978 06:54:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:28.238 06:54:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:28.238 06:54:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:28.238 06:54:35 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3009374 00:14:28.238 06:54:35 -- target/nvmf_lvs_grow.sh@74 -- # wait 3009374 00:14:28.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3009374 Killed "${NVMF_APP[@]}" "$@" 00:14:28.238 06:54:35 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:28.238 06:54:35 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:28.238 06:54:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:28.238 06:54:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:28.238 06:54:35 -- common/autotest_common.sh@10 -- # set +x 00:14:28.238 06:54:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:28.238 06:54:35 -- nvmf/common.sh@469 -- # nvmfpid=3014128 00:14:28.238 06:54:35 -- nvmf/common.sh@470 -- # waitforlisten 3014128 00:14:28.238 06:54:35 -- common/autotest_common.sh@819 -- # '[' -z 3014128 ']' 00:14:28.238 06:54:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.238 
06:54:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:28.238 06:54:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.238 06:54:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:28.238 06:54:35 -- common/autotest_common.sh@10 -- # set +x 00:14:28.238 [2024-05-12 06:54:35.343340] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:28.238 [2024-05-12 06:54:35.343424] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.498 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.498 [2024-05-12 06:54:35.416047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.498 [2024-05-12 06:54:35.533684] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.498 [2024-05-12 06:54:35.533869] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.498 [2024-05-12 06:54:35.533886] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.498 [2024-05-12 06:54:35.533898] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:28.498 [2024-05-12 06:54:35.533931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.437 06:54:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:29.437 06:54:36 -- common/autotest_common.sh@852 -- # return 0 00:14:29.437 06:54:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:29.437 06:54:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:29.437 06:54:36 -- common/autotest_common.sh@10 -- # set +x 00:14:29.437 06:54:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.437 06:54:36 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:29.696 [2024-05-12 06:54:36.611254] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:29.696 [2024-05-12 06:54:36.611385] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:29.696 [2024-05-12 06:54:36.611443] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:29.696 06:54:36 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:29.696 06:54:36 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev cf01597d-2db8-4608-84c9-15f69f0f292b 00:14:29.696 06:54:36 -- common/autotest_common.sh@887 -- # local bdev_name=cf01597d-2db8-4608-84c9-15f69f0f292b 00:14:29.696 06:54:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:29.696 06:54:36 -- common/autotest_common.sh@889 -- # local i 00:14:29.696 06:54:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:29.696 06:54:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:29.696 06:54:36 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:29.954 06:54:36 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cf01597d-2db8-4608-84c9-15f69f0f292b -t 2000 00:14:30.213 [ 00:14:30.213 { 00:14:30.213 "name": "cf01597d-2db8-4608-84c9-15f69f0f292b", 00:14:30.213 "aliases": [ 00:14:30.213 "lvs/lvol" 00:14:30.213 ], 00:14:30.213 "product_name": "Logical Volume", 00:14:30.213 "block_size": 4096, 00:14:30.213 "num_blocks": 38912, 00:14:30.213 "uuid": "cf01597d-2db8-4608-84c9-15f69f0f292b", 00:14:30.213 "assigned_rate_limits": { 00:14:30.213 "rw_ios_per_sec": 0, 00:14:30.213 "rw_mbytes_per_sec": 0, 00:14:30.213 "r_mbytes_per_sec": 0, 00:14:30.213 "w_mbytes_per_sec": 0 00:14:30.213 }, 00:14:30.213 "claimed": false, 00:14:30.213 "zoned": false, 00:14:30.213 "supported_io_types": { 00:14:30.213 "read": true, 00:14:30.213 "write": true, 00:14:30.213 "unmap": true, 00:14:30.213 "write_zeroes": true, 00:14:30.213 "flush": false, 00:14:30.213 "reset": true, 00:14:30.213 "compare": false, 00:14:30.213 "compare_and_write": false, 00:14:30.213 "abort": false, 00:14:30.213 "nvme_admin": false, 00:14:30.213 "nvme_io": false 00:14:30.213 }, 00:14:30.213 "driver_specific": { 00:14:30.213 "lvol": { 00:14:30.213 "lvol_store_uuid": "58a32d9a-05b5-4c4b-93ae-d77e267bd341", 00:14:30.213 "base_bdev": "aio_bdev", 00:14:30.213 "thin_provision": false, 00:14:30.213 "snapshot": false, 00:14:30.213 "clone": false, 00:14:30.213 "esnap_clone": false 00:14:30.213 } 00:14:30.213 } 00:14:30.213 } 00:14:30.213 ] 00:14:30.213 06:54:37 -- common/autotest_common.sh@895 -- # return 0 00:14:30.213 06:54:37 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:30.213 06:54:37 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:30.473 06:54:37 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:30.473 06:54:37 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:30.473 06:54:37 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:30.473 06:54:37 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:30.473 06:54:37 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:30.733 [2024-05-12 06:54:37.812078] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:30.733 06:54:37 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:30.733 06:54:37 -- common/autotest_common.sh@640 -- # local es=0 00:14:30.733 06:54:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:30.733 06:54:37 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.733 06:54:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:30.733 06:54:37 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.733 06:54:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:30.733 06:54:37 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.733 06:54:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:30.733 06:54:37 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.733 06:54:37 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:30.733 
06:54:37 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:30.991 request: 00:14:30.991 { 00:14:30.991 "uuid": "58a32d9a-05b5-4c4b-93ae-d77e267bd341", 00:14:30.991 "method": "bdev_lvol_get_lvstores", 00:14:30.991 "req_id": 1 00:14:30.991 } 00:14:30.991 Got JSON-RPC error response 00:14:30.991 response: 00:14:30.991 { 00:14:30.991 "code": -19, 00:14:30.991 "message": "No such device" 00:14:30.991 } 00:14:30.991 06:54:38 -- common/autotest_common.sh@643 -- # es=1 00:14:30.991 06:54:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:30.991 06:54:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:30.991 06:54:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:30.991 06:54:38 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:31.251 aio_bdev 00:14:31.251 06:54:38 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev cf01597d-2db8-4608-84c9-15f69f0f292b 00:14:31.251 06:54:38 -- common/autotest_common.sh@887 -- # local bdev_name=cf01597d-2db8-4608-84c9-15f69f0f292b 00:14:31.251 06:54:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:31.251 06:54:38 -- common/autotest_common.sh@889 -- # local i 00:14:31.251 06:54:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:31.251 06:54:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:31.251 06:54:38 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:31.510 06:54:38 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cf01597d-2db8-4608-84c9-15f69f0f292b -t 2000 00:14:31.768 [ 00:14:31.768 { 00:14:31.768 "name": 
"cf01597d-2db8-4608-84c9-15f69f0f292b", 00:14:31.768 "aliases": [ 00:14:31.768 "lvs/lvol" 00:14:31.768 ], 00:14:31.768 "product_name": "Logical Volume", 00:14:31.768 "block_size": 4096, 00:14:31.768 "num_blocks": 38912, 00:14:31.768 "uuid": "cf01597d-2db8-4608-84c9-15f69f0f292b", 00:14:31.768 "assigned_rate_limits": { 00:14:31.768 "rw_ios_per_sec": 0, 00:14:31.768 "rw_mbytes_per_sec": 0, 00:14:31.768 "r_mbytes_per_sec": 0, 00:14:31.768 "w_mbytes_per_sec": 0 00:14:31.768 }, 00:14:31.768 "claimed": false, 00:14:31.768 "zoned": false, 00:14:31.768 "supported_io_types": { 00:14:31.768 "read": true, 00:14:31.768 "write": true, 00:14:31.768 "unmap": true, 00:14:31.768 "write_zeroes": true, 00:14:31.768 "flush": false, 00:14:31.768 "reset": true, 00:14:31.768 "compare": false, 00:14:31.768 "compare_and_write": false, 00:14:31.768 "abort": false, 00:14:31.768 "nvme_admin": false, 00:14:31.768 "nvme_io": false 00:14:31.768 }, 00:14:31.768 "driver_specific": { 00:14:31.768 "lvol": { 00:14:31.768 "lvol_store_uuid": "58a32d9a-05b5-4c4b-93ae-d77e267bd341", 00:14:31.768 "base_bdev": "aio_bdev", 00:14:31.768 "thin_provision": false, 00:14:31.768 "snapshot": false, 00:14:31.768 "clone": false, 00:14:31.768 "esnap_clone": false 00:14:31.768 } 00:14:31.768 } 00:14:31.768 } 00:14:31.768 ] 00:14:31.768 06:54:38 -- common/autotest_common.sh@895 -- # return 0 00:14:31.768 06:54:38 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:31.768 06:54:38 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:32.026 06:54:39 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:32.026 06:54:39 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:32.026 06:54:39 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:32.284 
06:54:39 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:32.284 06:54:39 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cf01597d-2db8-4608-84c9-15f69f0f292b 00:14:32.543 06:54:39 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 58a32d9a-05b5-4c4b-93ae-d77e267bd341 00:14:32.804 06:54:39 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:33.068 06:54:40 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:33.068 00:14:33.068 real 0m19.973s 00:14:33.068 user 0m49.869s 00:14:33.068 sys 0m4.987s 00:14:33.068 06:54:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.068 06:54:40 -- common/autotest_common.sh@10 -- # set +x 00:14:33.068 ************************************ 00:14:33.068 END TEST lvs_grow_dirty 00:14:33.068 ************************************ 00:14:33.068 06:54:40 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:33.068 06:54:40 -- common/autotest_common.sh@796 -- # type=--id 00:14:33.068 06:54:40 -- common/autotest_common.sh@797 -- # id=0 00:14:33.068 06:54:40 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:33.068 06:54:40 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:33.068 06:54:40 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:33.068 06:54:40 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:33.068 06:54:40 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:33.068 06:54:40 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:33.068 nvmf_trace.0 00:14:33.068 06:54:40 -- common/autotest_common.sh@811 -- # 
return 0 00:14:33.068 06:54:40 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:33.068 06:54:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:33.068 06:54:40 -- nvmf/common.sh@116 -- # sync 00:14:33.068 06:54:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:33.068 06:54:40 -- nvmf/common.sh@119 -- # set +e 00:14:33.068 06:54:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:33.068 06:54:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:33.068 rmmod nvme_tcp 00:14:33.068 rmmod nvme_fabrics 00:14:33.068 rmmod nvme_keyring 00:14:33.068 06:54:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:33.068 06:54:40 -- nvmf/common.sh@123 -- # set -e 00:14:33.068 06:54:40 -- nvmf/common.sh@124 -- # return 0 00:14:33.068 06:54:40 -- nvmf/common.sh@477 -- # '[' -n 3014128 ']' 00:14:33.068 06:54:40 -- nvmf/common.sh@478 -- # killprocess 3014128 00:14:33.068 06:54:40 -- common/autotest_common.sh@926 -- # '[' -z 3014128 ']' 00:14:33.068 06:54:40 -- common/autotest_common.sh@930 -- # kill -0 3014128 00:14:33.068 06:54:40 -- common/autotest_common.sh@931 -- # uname 00:14:33.068 06:54:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:33.068 06:54:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3014128 00:14:33.375 06:54:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:33.375 06:54:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:33.375 06:54:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3014128' 00:14:33.375 killing process with pid 3014128 00:14:33.375 06:54:40 -- common/autotest_common.sh@945 -- # kill 3014128 00:14:33.375 06:54:40 -- common/autotest_common.sh@950 -- # wait 3014128 00:14:33.375 06:54:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:33.375 06:54:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:33.375 06:54:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:33.375 06:54:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.375 06:54:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:33.375 06:54:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.375 06:54:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.375 06:54:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.917 06:54:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:35.917 00:14:35.917 real 0m43.418s 00:14:35.917 user 1m13.700s 00:14:35.917 sys 0m8.502s 00:14:35.917 06:54:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.917 06:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:35.917 ************************************ 00:14:35.917 END TEST nvmf_lvs_grow 00:14:35.917 ************************************ 00:14:35.917 06:54:42 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:35.917 06:54:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:35.917 06:54:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:35.917 06:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:35.917 ************************************ 00:14:35.917 START TEST nvmf_bdev_io_wait 00:14:35.917 ************************************ 00:14:35.917 06:54:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:35.917 * Looking for test storage... 
00:14:35.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.917 06:54:42 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.917 06:54:42 -- nvmf/common.sh@7 -- # uname -s 00:14:35.917 06:54:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.917 06:54:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.917 06:54:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.917 06:54:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.917 06:54:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.917 06:54:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.917 06:54:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.917 06:54:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.917 06:54:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.917 06:54:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.917 06:54:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.917 06:54:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.917 06:54:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.917 06:54:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.917 06:54:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.917 06:54:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.917 06:54:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.917 06:54:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.917 06:54:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.917 06:54:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.917 06:54:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.917 06:54:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.917 06:54:42 -- paths/export.sh@5 -- # export PATH 00:14:35.918 06:54:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.918 06:54:42 -- nvmf/common.sh@46 -- # : 0 00:14:35.918 06:54:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:35.918 06:54:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:35.918 06:54:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:35.918 06:54:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.918 06:54:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.918 06:54:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:35.918 06:54:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:35.918 06:54:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:35.918 06:54:42 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.918 06:54:42 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.918 06:54:42 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:35.918 06:54:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:35.918 06:54:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.918 06:54:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:35.918 06:54:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:35.918 06:54:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:35.918 06:54:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.918 06:54:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.918 06:54:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.918 
06:54:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:35.918 06:54:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:35.918 06:54:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:35.918 06:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:37.820 06:54:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:37.820 06:54:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:37.820 06:54:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:37.820 06:54:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:37.820 06:54:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:37.820 06:54:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:37.820 06:54:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:37.820 06:54:44 -- nvmf/common.sh@294 -- # net_devs=() 00:14:37.820 06:54:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:37.820 06:54:44 -- nvmf/common.sh@295 -- # e810=() 00:14:37.820 06:54:44 -- nvmf/common.sh@295 -- # local -ga e810 00:14:37.820 06:54:44 -- nvmf/common.sh@296 -- # x722=() 00:14:37.820 06:54:44 -- nvmf/common.sh@296 -- # local -ga x722 00:14:37.820 06:54:44 -- nvmf/common.sh@297 -- # mlx=() 00:14:37.820 06:54:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:37.820 06:54:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.820 06:54:44 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.820 06:54:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:37.820 06:54:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:37.820 06:54:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:37.820 06:54:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:37.820 06:54:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:37.820 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:37.820 06:54:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:37.820 06:54:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:37.820 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:37.820 06:54:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:37.820 06:54:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:37.820 06:54:44 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:37.820 06:54:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:37.820 06:54:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.820 06:54:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:37.820 06:54:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.821 06:54:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:37.821 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:37.821 06:54:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.821 06:54:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:37.821 06:54:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.821 06:54:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:37.821 06:54:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.821 06:54:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:37.821 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:37.821 06:54:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.821 06:54:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:37.821 06:54:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:37.821 06:54:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:37.821 06:54:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:37.821 06:54:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:37.821 06:54:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.821 06:54:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.821 06:54:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.821 06:54:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:37.821 06:54:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.821 06:54:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.821 06:54:44 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:37.821 06:54:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.821 06:54:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.821 06:54:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:37.821 06:54:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:37.821 06:54:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.821 06:54:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.821 06:54:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.821 06:54:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.821 06:54:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:37.821 06:54:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.821 06:54:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.821 06:54:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.821 06:54:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:37.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:14:37.821 00:14:37.821 --- 10.0.0.2 ping statistics --- 00:14:37.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.821 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:14:37.821 06:54:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:14:37.821 00:14:37.821 --- 10.0.0.1 ping statistics --- 00:14:37.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.821 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:37.821 06:54:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.821 06:54:44 -- nvmf/common.sh@410 -- # return 0 00:14:37.821 06:54:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:37.821 06:54:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.821 06:54:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:37.821 06:54:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:37.821 06:54:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.821 06:54:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:37.821 06:54:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:37.821 06:54:44 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:37.821 06:54:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:37.821 06:54:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:37.821 06:54:44 -- common/autotest_common.sh@10 -- # set +x 00:14:37.821 06:54:44 -- nvmf/common.sh@469 -- # nvmfpid=3016691 00:14:37.821 06:54:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:37.821 06:54:44 -- nvmf/common.sh@470 -- # waitforlisten 3016691 00:14:37.821 06:54:44 -- common/autotest_common.sh@819 -- # '[' -z 3016691 ']' 00:14:37.821 06:54:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.821 06:54:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:37.821 06:54:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:37.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.821 06:54:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:37.821 06:54:44 -- common/autotest_common.sh@10 -- # set +x 00:14:37.821 [2024-05-12 06:54:44.789022] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:37.821 [2024-05-12 06:54:44.789101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.821 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.821 [2024-05-12 06:54:44.852299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.080 [2024-05-12 06:54:44.968203] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:38.080 [2024-05-12 06:54:44.968370] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.080 [2024-05-12 06:54:44.968389] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.080 [2024-05-12 06:54:44.968403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
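The trace above runs the NVMe-oF target inside a network namespace: the harness builds an `NVMF_TARGET_NS_CMD` array (`nvmf/common.sh@242`) and later prepends it to `NVMF_APP` (`nvmf/common.sh@269`), so every target-side command is wrapped by `ip netns exec`. A minimal sketch of that array-prefix technique, using the namespace name from this log (the `nvmf_tgt` flags mirror the invocation above; printing instead of executing, since the real command needs root and an existing namespace):

```shell
#!/usr/bin/env bash
# Array-prefix pattern from the log: commands meant to run inside the
# target's network namespace get "ip netns exec <ns>" prepended.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# The harness composes the app command line the same way
# (nvmf/common.sh@269 above); the flags here mirror the log.
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

# Print the composed command rather than running it:
printf '%s\n' "${NVMF_APP[*]}"
# → ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF
```

Keeping the prefix as a bash array (rather than a string) preserves word boundaries if the namespace name ever contains spaces, which is why the harness expands it with `"${NVMF_TARGET_NS_CMD[@]}"`.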
00:14:38.080 [2024-05-12 06:54:44.968493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.080 [2024-05-12 06:54:44.968564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.080 [2024-05-12 06:54:44.968655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.080 [2024-05-12 06:54:44.968657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.648 06:54:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:38.648 06:54:45 -- common/autotest_common.sh@852 -- # return 0 00:14:38.648 06:54:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:38.648 06:54:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:38.648 06:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.907 06:54:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:38.907 06:54:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.907 06:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.907 06:54:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:38.907 06:54:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.907 06:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.907 06:54:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.907 06:54:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.907 06:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.907 [2024-05-12 06:54:45.868210] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.907 06:54:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:38.907 06:54:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.907 06:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.907 Malloc0 00:14:38.907 06:54:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:38.907 06:54:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.907 06:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.907 06:54:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:38.907 06:54:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.907 06:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.907 06:54:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.907 06:54:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.907 06:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.907 [2024-05-12 06:54:45.927042] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.907 06:54:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3016854 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:38.907 06:54:45 -- target/bdev_io_wait.sh@30 -- # READ_PID=3016856 00:14:38.907 06:54:45 -- nvmf/common.sh@520 -- # config=() 00:14:38.907 06:54:45 -- nvmf/common.sh@520 -- # local 
subsystem config 00:14:38.907 06:54:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:38.907 06:54:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:38.907 { 00:14:38.907 "params": { 00:14:38.907 "name": "Nvme$subsystem", 00:14:38.907 "trtype": "$TEST_TRANSPORT", 00:14:38.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:38.907 "adrfam": "ipv4", 00:14:38.907 "trsvcid": "$NVMF_PORT", 00:14:38.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.907 "hdgst": ${hdgst:-false}, 00:14:38.907 "ddgst": ${ddgst:-false} 00:14:38.907 }, 00:14:38.907 "method": "bdev_nvme_attach_controller" 00:14:38.908 } 00:14:38.908 EOF 00:14:38.908 )") 00:14:38.908 06:54:45 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:38.908 06:54:45 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:38.908 06:54:45 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3016858 00:14:38.908 06:54:45 -- nvmf/common.sh@520 -- # config=() 00:14:38.908 06:54:45 -- nvmf/common.sh@520 -- # local subsystem config 00:14:38.908 06:54:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:38.908 06:54:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:38.908 { 00:14:38.908 "params": { 00:14:38.908 "name": "Nvme$subsystem", 00:14:38.908 "trtype": "$TEST_TRANSPORT", 00:14:38.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:38.908 "adrfam": "ipv4", 00:14:38.908 "trsvcid": "$NVMF_PORT", 00:14:38.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.908 "hdgst": ${hdgst:-false}, 00:14:38.908 "ddgst": ${ddgst:-false} 00:14:38.908 }, 00:14:38.908 "method": "bdev_nvme_attach_controller" 00:14:38.908 } 00:14:38.908 EOF 00:14:38.908 )") 00:14:38.908 06:54:45 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:38.908 
06:54:45 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:38.908 06:54:45 -- nvmf/common.sh@542 -- # cat 00:14:38.908 06:54:45 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3016861 00:14:38.908 06:54:45 -- nvmf/common.sh@520 -- # config=() 00:14:38.908 06:54:45 -- target/bdev_io_wait.sh@35 -- # sync 00:14:38.908 06:54:45 -- nvmf/common.sh@520 -- # local subsystem config 00:14:38.908 06:54:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:38.908 06:54:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:38.908 { 00:14:38.908 "params": { 00:14:38.908 "name": "Nvme$subsystem", 00:14:38.908 "trtype": "$TEST_TRANSPORT", 00:14:38.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:38.908 "adrfam": "ipv4", 00:14:38.908 "trsvcid": "$NVMF_PORT", 00:14:38.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.908 "hdgst": ${hdgst:-false}, 00:14:38.908 "ddgst": ${ddgst:-false} 00:14:38.908 }, 00:14:38.908 "method": "bdev_nvme_attach_controller" 00:14:38.908 } 00:14:38.908 EOF 00:14:38.908 )") 00:14:38.908 06:54:45 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:38.908 06:54:45 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:38.908 06:54:45 -- nvmf/common.sh@542 -- # cat 00:14:38.908 06:54:45 -- nvmf/common.sh@520 -- # config=() 00:14:38.908 06:54:45 -- nvmf/common.sh@520 -- # local subsystem config 00:14:38.908 06:54:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:38.908 06:54:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:38.908 { 00:14:38.908 "params": { 00:14:38.908 "name": "Nvme$subsystem", 00:14:38.908 "trtype": "$TEST_TRANSPORT", 00:14:38.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:38.908 
"adrfam": "ipv4", 00:14:38.908 "trsvcid": "$NVMF_PORT", 00:14:38.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.908 "hdgst": ${hdgst:-false}, 00:14:38.908 "ddgst": ${ddgst:-false} 00:14:38.908 }, 00:14:38.908 "method": "bdev_nvme_attach_controller" 00:14:38.908 } 00:14:38.908 EOF 00:14:38.908 )") 00:14:38.908 06:54:45 -- nvmf/common.sh@542 -- # cat 00:14:38.908 06:54:45 -- target/bdev_io_wait.sh@37 -- # wait 3016854 00:14:38.908 06:54:45 -- nvmf/common.sh@544 -- # jq . 00:14:38.908 06:54:45 -- nvmf/common.sh@542 -- # cat 00:14:38.908 06:54:45 -- nvmf/common.sh@544 -- # jq . 00:14:38.908 06:54:45 -- nvmf/common.sh@544 -- # jq . 00:14:38.908 06:54:45 -- nvmf/common.sh@545 -- # IFS=, 00:14:38.908 06:54:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:38.908 "params": { 00:14:38.908 "name": "Nvme1", 00:14:38.908 "trtype": "tcp", 00:14:38.908 "traddr": "10.0.0.2", 00:14:38.908 "adrfam": "ipv4", 00:14:38.908 "trsvcid": "4420", 00:14:38.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.908 "hdgst": false, 00:14:38.908 "ddgst": false 00:14:38.908 }, 00:14:38.908 "method": "bdev_nvme_attach_controller" 00:14:38.908 }' 00:14:38.908 06:54:45 -- nvmf/common.sh@545 -- # IFS=, 00:14:38.908 06:54:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:38.908 "params": { 00:14:38.908 "name": "Nvme1", 00:14:38.908 "trtype": "tcp", 00:14:38.908 "traddr": "10.0.0.2", 00:14:38.908 "adrfam": "ipv4", 00:14:38.908 "trsvcid": "4420", 00:14:38.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.908 "hdgst": false, 00:14:38.908 "ddgst": false 00:14:38.908 }, 00:14:38.908 "method": "bdev_nvme_attach_controller" 00:14:38.908 }' 00:14:38.908 06:54:45 -- nvmf/common.sh@544 -- # jq . 
00:14:38.908 06:54:45 -- nvmf/common.sh@545 -- # IFS=, 00:14:38.908 06:54:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:38.908 "params": { 00:14:38.908 "name": "Nvme1", 00:14:38.908 "trtype": "tcp", 00:14:38.908 "traddr": "10.0.0.2", 00:14:38.908 "adrfam": "ipv4", 00:14:38.908 "trsvcid": "4420", 00:14:38.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.908 "hdgst": false, 00:14:38.908 "ddgst": false 00:14:38.908 }, 00:14:38.908 "method": "bdev_nvme_attach_controller" 00:14:38.908 }' 00:14:38.908 06:54:45 -- nvmf/common.sh@545 -- # IFS=, 00:14:38.908 06:54:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:38.908 "params": { 00:14:38.908 "name": "Nvme1", 00:14:38.908 "trtype": "tcp", 00:14:38.908 "traddr": "10.0.0.2", 00:14:38.908 "adrfam": "ipv4", 00:14:38.908 "trsvcid": "4420", 00:14:38.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.908 "hdgst": false, 00:14:38.908 "ddgst": false 00:14:38.908 }, 00:14:38.908 "method": "bdev_nvme_attach_controller" 00:14:38.908 }' 00:14:38.908 [2024-05-12 06:54:45.968553] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:38.908 [2024-05-12 06:54:45.968553] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:38.908 [2024-05-12 06:54:45.968553] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:38.908 [2024-05-12 06:54:45.968554] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
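The four heredoc blocks above are the `gen_nvmf_target_json` pattern: each bdevperf instance receives its controller config as JSON on `/dev/fd/63` via process substitution (`--json /dev/fd/63` in the command lines). A condensed sketch of how one such config is assembled, with the values this log resolves at runtime (tcp transport, target 10.0.0.2:4420); the exact upstream helper joins several subsystems with `IFS=,` and pipes the result through `jq .`, which is omitted here:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json heredoc template traced above.
# hdgst/ddgst default to false via ${var:-false}, as in the log.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
# In the harness this string is fed to bdevperf as
#   --json <(gen_nvmf_target_json), i.e. /dev/fd/63:
printf '%s\n' "$config"
```

Because the heredoc is unquoted, `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` expand at generation time, which is why the `printf '%s\n'` output in the log shows the literal `tcp`/`10.0.0.2`/`4420` values.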
00:14:38.908 [2024-05-12 06:54:45.968643] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:14:38.908 [2024-05-12 06:54:45.968645] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:14:38.908 [2024-05-12 06:54:45.968645] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:14:38.908 [2024-05-12 06:54:45.968645] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:14:38.908 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.167 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.167 [2024-05-12 06:54:46.136594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.167 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.167 [2024-05-12 06:54:46.234856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:39.167 [2024-05-12 06:54:46.242329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.426 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.426 [2024-05-12 06:54:46.337667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:39.426 [2024-05-12 06:54:46.342625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.426 [2024-05-12 06:54:46.438706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:39.426 [2024-05-12 06:54:46.444881] app.c: 
798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.426 [2024-05-12 06:54:46.544835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:39.686 Running I/O for 1 seconds... 00:14:39.686 Running I/O for 1 seconds... 00:14:39.686 Running I/O for 1 seconds... 00:14:39.953 Running I/O for 1 seconds... 00:14:40.895 00:14:40.895 Latency(us) 00:14:40.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.895 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:40.895 Nvme1n1 : 1.00 197195.26 770.29 0.00 0.00 646.61 253.35 855.61 00:14:40.895 =================================================================================================================== 00:14:40.895 Total : 197195.26 770.29 0.00 0.00 646.61 253.35 855.61 00:14:40.895 00:14:40.895 Latency(us) 00:14:40.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.895 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:40.895 Nvme1n1 : 1.01 9419.84 36.80 0.00 0.00 13546.78 3737.98 26796.94 00:14:40.895 =================================================================================================================== 00:14:40.895 Total : 9419.84 36.80 0.00 0.00 13546.78 3737.98 26796.94 00:14:40.895 00:14:40.895 Latency(us) 00:14:40.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.895 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:40.895 Nvme1n1 : 1.02 5276.94 20.61 0.00 0.00 24046.36 8689.59 36311.80 00:14:40.895 =================================================================================================================== 00:14:40.895 Total : 5276.94 20.61 0.00 0.00 24046.36 8689.59 36311.80 00:14:40.895 00:14:40.895 Latency(us) 00:14:40.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.895 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 
00:14:40.895 Nvme1n1 : 1.00 12049.69 47.07 0.00 0.00 10596.57 4538.97 19806.44 00:14:40.895 =================================================================================================================== 00:14:40.895 Total : 12049.69 47.07 0.00 0.00 10596.57 4538.97 19806.44 00:14:41.155 06:54:48 -- target/bdev_io_wait.sh@38 -- # wait 3016856 00:14:41.155 06:54:48 -- target/bdev_io_wait.sh@39 -- # wait 3016858 00:14:41.155 06:54:48 -- target/bdev_io_wait.sh@40 -- # wait 3016861 00:14:41.155 06:54:48 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.155 06:54:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.155 06:54:48 -- common/autotest_common.sh@10 -- # set +x 00:14:41.155 06:54:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.155 06:54:48 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:41.156 06:54:48 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:41.156 06:54:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:41.156 06:54:48 -- nvmf/common.sh@116 -- # sync 00:14:41.156 06:54:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:41.156 06:54:48 -- nvmf/common.sh@119 -- # set +e 00:14:41.156 06:54:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:41.156 06:54:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:41.156 rmmod nvme_tcp 00:14:41.156 rmmod nvme_fabrics 00:14:41.156 rmmod nvme_keyring 00:14:41.156 06:54:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:41.156 06:54:48 -- nvmf/common.sh@123 -- # set -e 00:14:41.156 06:54:48 -- nvmf/common.sh@124 -- # return 0 00:14:41.156 06:54:48 -- nvmf/common.sh@477 -- # '[' -n 3016691 ']' 00:14:41.156 06:54:48 -- nvmf/common.sh@478 -- # killprocess 3016691 00:14:41.156 06:54:48 -- common/autotest_common.sh@926 -- # '[' -z 3016691 ']' 00:14:41.156 06:54:48 -- common/autotest_common.sh@930 -- # kill -0 3016691 00:14:41.156 06:54:48 -- common/autotest_common.sh@931 -- # uname 00:14:41.156 
06:54:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.156 06:54:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3016691 00:14:41.156 06:54:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:41.156 06:54:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:41.156 06:54:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3016691' 00:14:41.156 killing process with pid 3016691 00:14:41.156 06:54:48 -- common/autotest_common.sh@945 -- # kill 3016691 00:14:41.156 06:54:48 -- common/autotest_common.sh@950 -- # wait 3016691 00:14:41.414 06:54:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:41.414 06:54:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:41.414 06:54:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:41.414 06:54:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.414 06:54:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:41.414 06:54:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.414 06:54:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.414 06:54:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.948 06:54:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:43.948 00:14:43.948 real 0m8.017s 00:14:43.948 user 0m19.974s 00:14:43.948 sys 0m3.608s 00:14:43.948 06:54:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.948 06:54:50 -- common/autotest_common.sh@10 -- # set +x 00:14:43.948 ************************************ 00:14:43.948 END TEST nvmf_bdev_io_wait 00:14:43.948 ************************************ 00:14:43.948 06:54:50 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:43.948 06:54:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:43.948 06:54:50 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:14:43.948 06:54:50 -- common/autotest_common.sh@10 -- # set +x 00:14:43.948 ************************************ 00:14:43.948 START TEST nvmf_queue_depth 00:14:43.948 ************************************ 00:14:43.948 06:54:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:43.948 * Looking for test storage... 00:14:43.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.948 06:54:50 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.948 06:54:50 -- nvmf/common.sh@7 -- # uname -s 00:14:43.948 06:54:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.948 06:54:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.948 06:54:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.948 06:54:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.948 06:54:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.948 06:54:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.948 06:54:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.948 06:54:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.948 06:54:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.948 06:54:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.948 06:54:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.948 06:54:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.948 06:54:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.948 06:54:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.948 06:54:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.948 06:54:50 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.948 06:54:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.948 06:54:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.948 06:54:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.948 06:54:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.948 06:54:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.949 06:54:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.949 06:54:50 -- paths/export.sh@5 -- # export PATH 00:14:43.949 06:54:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.949 06:54:50 -- nvmf/common.sh@46 -- # : 0 00:14:43.949 06:54:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:43.949 06:54:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:43.949 06:54:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:43.949 06:54:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.949 06:54:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.949 06:54:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:43.949 06:54:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:43.949 06:54:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:43.949 06:54:50 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:43.949 06:54:50 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:43.949 06:54:50 -- 
target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.949 06:54:50 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:43.949 06:54:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:43.949 06:54:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.949 06:54:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:43.949 06:54:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:43.949 06:54:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:43.949 06:54:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.949 06:54:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.949 06:54:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.949 06:54:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:43.949 06:54:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:43.949 06:54:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:43.949 06:54:50 -- common/autotest_common.sh@10 -- # set +x 00:14:45.855 06:54:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:45.855 06:54:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:45.855 06:54:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:45.855 06:54:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:45.855 06:54:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:45.855 06:54:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:45.855 06:54:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:45.855 06:54:52 -- nvmf/common.sh@294 -- # net_devs=() 00:14:45.855 06:54:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:45.855 06:54:52 -- nvmf/common.sh@295 -- # e810=() 00:14:45.855 06:54:52 -- nvmf/common.sh@295 -- # local -ga e810 00:14:45.855 06:54:52 -- nvmf/common.sh@296 -- # x722=() 00:14:45.855 06:54:52 -- nvmf/common.sh@296 -- # local -ga x722 00:14:45.855 06:54:52 -- nvmf/common.sh@297 -- # mlx=() 00:14:45.855 06:54:52 -- nvmf/common.sh@297 -- # local -ga mlx 
00:14:45.855 06:54:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.855 06:54:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:45.855 06:54:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:45.855 06:54:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:45.855 06:54:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.855 06:54:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:45.855 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:45.855 06:54:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@351 -- # [[ 
tcp == rdma ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.855 06:54:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:45.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:45.855 06:54:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:45.855 06:54:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.855 06:54:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.855 06:54:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.855 06:54:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.855 06:54:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:45.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:45.855 06:54:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.855 06:54:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.855 06:54:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.855 06:54:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.855 06:54:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.855 06:54:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:45.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:45.855 06:54:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.855 06:54:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 
00:14:45.855 06:54:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:45.855 06:54:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:45.855 06:54:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.855 06:54:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.855 06:54:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.855 06:54:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:45.855 06:54:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.855 06:54:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.855 06:54:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:45.855 06:54:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.855 06:54:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.855 06:54:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:45.855 06:54:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:45.855 06:54:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.855 06:54:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.855 06:54:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.855 06:54:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.855 06:54:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:45.855 06:54:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.855 06:54:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.855 06:54:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.855 06:54:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:45.855 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:14:45.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:14:45.855 00:14:45.855 --- 10.0.0.2 ping statistics --- 00:14:45.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.855 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:14:45.855 06:54:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:14:45.855 00:14:45.855 --- 10.0.0.1 ping statistics --- 00:14:45.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.855 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:14:45.855 06:54:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.855 06:54:52 -- nvmf/common.sh@410 -- # return 0 00:14:45.855 06:54:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:45.855 06:54:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.855 06:54:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:45.855 06:54:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.855 06:54:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:45.855 06:54:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:45.855 06:54:52 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:45.855 06:54:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:45.855 06:54:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:45.855 06:54:52 -- common/autotest_common.sh@10 -- # set +x 00:14:45.855 06:54:52 -- nvmf/common.sh@469 -- # nvmfpid=3019095 00:14:45.855 06:54:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:45.855 06:54:52 -- nvmf/common.sh@470 -- # waitforlisten 3019095 00:14:45.855 06:54:52 -- 
common/autotest_common.sh@819 -- # '[' -z 3019095 ']' 00:14:45.855 06:54:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.855 06:54:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:45.855 06:54:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.855 06:54:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:45.855 06:54:52 -- common/autotest_common.sh@10 -- # set +x 00:14:45.855 [2024-05-12 06:54:52.733073] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:45.855 [2024-05-12 06:54:52.733158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.855 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.855 [2024-05-12 06:54:52.796652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.855 [2024-05-12 06:54:52.905449] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:45.855 [2024-05-12 06:54:52.905597] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.855 [2024-05-12 06:54:52.905613] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.855 [2024-05-12 06:54:52.905626] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:45.855 [2024-05-12 06:54:52.905651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.790 06:54:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:46.790 06:54:53 -- common/autotest_common.sh@852 -- # return 0 00:14:46.790 06:54:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:46.790 06:54:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:46.790 06:54:53 -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 06:54:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.790 06:54:53 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.790 06:54:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.790 06:54:53 -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 [2024-05-12 06:54:53.701810] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.790 06:54:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.790 06:54:53 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:46.790 06:54:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.790 06:54:53 -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 Malloc0 00:14:46.790 06:54:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.790 06:54:53 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:46.790 06:54:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.790 06:54:53 -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 06:54:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.790 06:54:53 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:46.790 06:54:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.790 06:54:53 -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 06:54:53 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.790 06:54:53 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.790 06:54:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.790 06:54:53 -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 [2024-05-12 06:54:53.766539] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.790 06:54:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.790 06:54:53 -- target/queue_depth.sh@30 -- # bdevperf_pid=3019251 00:14:46.790 06:54:53 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:46.790 06:54:53 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:46.790 06:54:53 -- target/queue_depth.sh@33 -- # waitforlisten 3019251 /var/tmp/bdevperf.sock 00:14:46.790 06:54:53 -- common/autotest_common.sh@819 -- # '[' -z 3019251 ']' 00:14:46.790 06:54:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.790 06:54:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:46.790 06:54:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.790 06:54:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:46.790 06:54:53 -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 [2024-05-12 06:54:53.807296] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:14:46.790 [2024-05-12 06:54:53.807370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019251 ] 00:14:46.790 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.790 [2024-05-12 06:54:53.869545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.049 [2024-05-12 06:54:53.983675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.987 06:54:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:47.987 06:54:54 -- common/autotest_common.sh@852 -- # return 0 00:14:47.987 06:54:54 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:47.987 06:54:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.987 06:54:54 -- common/autotest_common.sh@10 -- # set +x 00:14:47.987 NVMe0n1 00:14:47.987 06:54:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.987 06:54:54 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:47.987 Running I/O for 10 seconds... 
00:15:00.238 00:15:00.238 Latency(us) 00:15:00.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.238 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:00.238 Verification LBA range: start 0x0 length 0x4000 00:15:00.238 NVMe0n1 : 10.07 12425.11 48.54 0.00 0.00 82089.52 14078.10 59807.67 00:15:00.238 =================================================================================================================== 00:15:00.238 Total : 12425.11 48.54 0.00 0.00 82089.52 14078.10 59807.67 00:15:00.238 0 00:15:00.238 06:55:05 -- target/queue_depth.sh@39 -- # killprocess 3019251 00:15:00.238 06:55:05 -- common/autotest_common.sh@926 -- # '[' -z 3019251 ']' 00:15:00.238 06:55:05 -- common/autotest_common.sh@930 -- # kill -0 3019251 00:15:00.238 06:55:05 -- common/autotest_common.sh@931 -- # uname 00:15:00.238 06:55:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:00.238 06:55:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3019251 00:15:00.238 06:55:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:00.238 06:55:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:00.238 06:55:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3019251' 00:15:00.238 killing process with pid 3019251 00:15:00.238 06:55:05 -- common/autotest_common.sh@945 -- # kill 3019251 00:15:00.238 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.238 00:15:00.238 Latency(us) 00:15:00.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.238 =================================================================================================================== 00:15:00.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.238 06:55:05 -- common/autotest_common.sh@950 -- # wait 3019251 00:15:00.238 06:55:05 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:00.238 06:55:05 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:15:00.238 06:55:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:00.238 06:55:05 -- nvmf/common.sh@116 -- # sync 00:15:00.238 06:55:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:00.238 06:55:05 -- nvmf/common.sh@119 -- # set +e 00:15:00.238 06:55:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:00.238 06:55:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:00.238 rmmod nvme_tcp 00:15:00.238 rmmod nvme_fabrics 00:15:00.238 rmmod nvme_keyring 00:15:00.238 06:55:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:00.238 06:55:05 -- nvmf/common.sh@123 -- # set -e 00:15:00.238 06:55:05 -- nvmf/common.sh@124 -- # return 0 00:15:00.238 06:55:05 -- nvmf/common.sh@477 -- # '[' -n 3019095 ']' 00:15:00.238 06:55:05 -- nvmf/common.sh@478 -- # killprocess 3019095 00:15:00.238 06:55:05 -- common/autotest_common.sh@926 -- # '[' -z 3019095 ']' 00:15:00.238 06:55:05 -- common/autotest_common.sh@930 -- # kill -0 3019095 00:15:00.238 06:55:05 -- common/autotest_common.sh@931 -- # uname 00:15:00.238 06:55:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:00.238 06:55:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3019095 00:15:00.238 06:55:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:00.238 06:55:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:00.238 06:55:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3019095' 00:15:00.238 killing process with pid 3019095 00:15:00.238 06:55:05 -- common/autotest_common.sh@945 -- # kill 3019095 00:15:00.238 06:55:05 -- common/autotest_common.sh@950 -- # wait 3019095 00:15:00.238 06:55:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:00.238 06:55:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:00.238 06:55:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:00.238 06:55:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:15:00.238 06:55:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:00.238 06:55:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.238 06:55:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.238 06:55:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.807 06:55:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:00.807 00:15:00.807 real 0m17.260s 00:15:00.807 user 0m24.905s 00:15:00.807 sys 0m3.098s 00:15:00.807 06:55:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.807 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:15:00.807 ************************************ 00:15:00.807 END TEST nvmf_queue_depth 00:15:00.807 ************************************ 00:15:00.807 06:55:07 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:00.807 06:55:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:00.807 06:55:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.807 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:15:00.807 ************************************ 00:15:00.807 START TEST nvmf_multipath 00:15:00.807 ************************************ 00:15:00.807 06:55:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:00.807 * Looking for test storage... 
00:15:00.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.807 06:55:07 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.807 06:55:07 -- nvmf/common.sh@7 -- # uname -s 00:15:00.807 06:55:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.807 06:55:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.807 06:55:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.807 06:55:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.807 06:55:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.807 06:55:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.807 06:55:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.807 06:55:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.807 06:55:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.807 06:55:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.807 06:55:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.807 06:55:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.807 06:55:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.807 06:55:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.807 06:55:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.807 06:55:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.807 06:55:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.807 06:55:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.807 06:55:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.807 06:55:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.807 06:55:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.807 06:55:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.807 06:55:07 -- paths/export.sh@5 -- # export PATH 00:15:00.807 06:55:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.807 06:55:07 -- nvmf/common.sh@46 -- # : 0 00:15:00.807 06:55:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:00.807 06:55:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:00.807 06:55:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:00.807 06:55:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.807 06:55:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.807 06:55:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:00.807 06:55:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:00.807 06:55:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:01.065 06:55:07 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:01.065 06:55:07 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:01.065 06:55:07 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:01.065 06:55:07 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.065 06:55:07 -- target/multipath.sh@43 -- # nvmftestinit 00:15:01.065 06:55:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:01.065 06:55:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.065 06:55:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:01.065 06:55:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:01.065 06:55:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:01.065 06:55:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:01.065 06:55:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.065 06:55:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.065 06:55:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:01.065 06:55:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:01.065 06:55:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:01.065 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:15:02.970 06:55:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:02.970 06:55:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:02.970 06:55:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:02.970 06:55:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:02.970 06:55:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:02.970 06:55:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:02.970 06:55:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:02.970 06:55:09 -- nvmf/common.sh@294 -- # net_devs=() 00:15:02.970 06:55:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:02.970 06:55:09 -- nvmf/common.sh@295 -- # e810=() 00:15:02.970 06:55:09 -- nvmf/common.sh@295 -- # local -ga e810 00:15:02.970 06:55:09 -- nvmf/common.sh@296 -- # x722=() 00:15:02.970 06:55:09 -- nvmf/common.sh@296 -- # local -ga x722 00:15:02.970 06:55:09 -- nvmf/common.sh@297 -- # mlx=() 00:15:02.970 06:55:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:02.970 06:55:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.970 06:55:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.970 06:55:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.970 06:55:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.970 06:55:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.970 06:55:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:15:02.971 06:55:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.971 06:55:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.971 06:55:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.971 06:55:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.971 06:55:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.971 06:55:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:02.971 06:55:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:02.971 06:55:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:02.971 06:55:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.971 06:55:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:02.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:02.971 06:55:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.971 06:55:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:02.971 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:02.971 06:55:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.971 06:55:09 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:02.971 06:55:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.971 06:55:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.971 06:55:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.971 06:55:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.971 06:55:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:02.971 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:02.971 06:55:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.971 06:55:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.971 06:55:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.971 06:55:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.971 06:55:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.971 06:55:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:02.971 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:02.971 06:55:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.971 06:55:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:02.971 06:55:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:02.971 06:55:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:02.971 06:55:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:02.971 06:55:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.971 06:55:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.971 06:55:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.971 06:55:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:15:02.971 06:55:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.971 06:55:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.971 06:55:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:02.971 06:55:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.971 06:55:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.971 06:55:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:02.971 06:55:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:02.971 06:55:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.971 06:55:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.971 06:55:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.971 06:55:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.971 06:55:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:02.971 06:55:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.971 06:55:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.971 06:55:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.971 06:55:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:02.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:15:02.971 00:15:02.971 --- 10.0.0.2 ping statistics --- 00:15:02.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.971 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:15:02.971 06:55:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:15:02.971 00:15:02.971 --- 10.0.0.1 ping statistics --- 00:15:02.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.971 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:15:02.971 06:55:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.971 06:55:10 -- nvmf/common.sh@410 -- # return 0 00:15:02.971 06:55:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:02.971 06:55:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.971 06:55:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:02.971 06:55:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:02.971 06:55:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.971 06:55:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:02.971 06:55:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:02.971 06:55:10 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:02.971 06:55:10 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:02.971 only one NIC for nvmf test 00:15:02.971 06:55:10 -- target/multipath.sh@47 -- # nvmftestfini 00:15:02.971 06:55:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:02.971 06:55:10 -- nvmf/common.sh@116 -- # sync 00:15:02.971 06:55:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:02.971 06:55:10 -- nvmf/common.sh@119 -- # set +e 00:15:02.971 06:55:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:02.971 06:55:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:02.971 rmmod nvme_tcp 00:15:02.971 rmmod nvme_fabrics 00:15:02.971 rmmod nvme_keyring 00:15:02.971 06:55:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.971 06:55:10 -- nvmf/common.sh@123 -- # set -e 00:15:02.971 06:55:10 -- nvmf/common.sh@124 -- # return 0 00:15:02.971 06:55:10 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:02.971 06:55:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.971 06:55:10 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.971 06:55:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.971 06:55:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.971 06:55:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.971 06:55:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.971 06:55:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.971 06:55:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.510 06:55:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:05.510 06:55:12 -- target/multipath.sh@48 -- # exit 0 00:15:05.510 06:55:12 -- target/multipath.sh@1 -- # nvmftestfini 00:15:05.510 06:55:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:05.510 06:55:12 -- nvmf/common.sh@116 -- # sync 00:15:05.510 06:55:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:05.510 06:55:12 -- nvmf/common.sh@119 -- # set +e 00:15:05.510 06:55:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:05.510 06:55:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:05.510 06:55:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:05.510 06:55:12 -- nvmf/common.sh@123 -- # set -e 00:15:05.510 06:55:12 -- nvmf/common.sh@124 -- # return 0 00:15:05.510 06:55:12 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:05.510 06:55:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.510 06:55:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.510 06:55:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.510 06:55:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.510 06:55:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.510 06:55:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.510 06:55:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.510 06:55:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.510 06:55:12 
-- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:05.510 00:15:05.510 real 0m4.289s 00:15:05.510 user 0m0.826s 00:15:05.510 sys 0m1.449s 00:15:05.510 06:55:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.510 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:15:05.510 ************************************ 00:15:05.510 END TEST nvmf_multipath 00:15:05.510 ************************************ 00:15:05.510 06:55:12 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:05.510 06:55:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:05.510 06:55:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:05.510 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:15:05.510 ************************************ 00:15:05.510 START TEST nvmf_zcopy 00:15:05.510 ************************************ 00:15:05.510 06:55:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:05.510 * Looking for test storage... 
00:15:05.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.510 06:55:12 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.510 06:55:12 -- nvmf/common.sh@7 -- # uname -s 00:15:05.510 06:55:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.510 06:55:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.510 06:55:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.510 06:55:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.510 06:55:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.510 06:55:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.510 06:55:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.510 06:55:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.510 06:55:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.510 06:55:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.510 06:55:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.510 06:55:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.510 06:55:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.510 06:55:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.510 06:55:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.510 06:55:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.510 06:55:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.510 06:55:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.510 06:55:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.510 06:55:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.510 06:55:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.510 06:55:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.510 06:55:12 -- paths/export.sh@5 -- # export PATH 00:15:05.510 06:55:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.510 06:55:12 -- nvmf/common.sh@46 -- # : 0 00:15:05.510 06:55:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:05.510 06:55:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:05.510 06:55:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:05.510 06:55:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.510 06:55:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.510 06:55:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:05.510 06:55:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:05.510 06:55:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:05.510 06:55:12 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:05.510 06:55:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:05.510 06:55:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.510 06:55:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:05.510 06:55:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:05.510 06:55:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:05.510 06:55:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.510 06:55:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.510 06:55:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.510 06:55:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:05.510 06:55:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:05.510 06:55:12 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:15:05.510 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:15:07.412 06:55:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:07.412 06:55:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:07.412 06:55:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:07.412 06:55:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:07.412 06:55:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:07.412 06:55:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:07.412 06:55:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:07.412 06:55:14 -- nvmf/common.sh@294 -- # net_devs=() 00:15:07.412 06:55:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:07.412 06:55:14 -- nvmf/common.sh@295 -- # e810=() 00:15:07.412 06:55:14 -- nvmf/common.sh@295 -- # local -ga e810 00:15:07.412 06:55:14 -- nvmf/common.sh@296 -- # x722=() 00:15:07.412 06:55:14 -- nvmf/common.sh@296 -- # local -ga x722 00:15:07.412 06:55:14 -- nvmf/common.sh@297 -- # mlx=() 00:15:07.412 06:55:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:07.412 06:55:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:07.412 06:55:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:07.412 06:55:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:07.412 06:55:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:07.412 06:55:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:07.412 06:55:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:07.412 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:07.412 06:55:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:07.412 06:55:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:07.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:07.412 06:55:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:07.412 06:55:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:15:07.412 06:55:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.412 06:55:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:07.412 06:55:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.412 06:55:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:07.412 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:07.412 06:55:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.412 06:55:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:07.412 06:55:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.412 06:55:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:07.412 06:55:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.412 06:55:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:07.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:07.412 06:55:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.412 06:55:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:07.412 06:55:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:07.412 06:55:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:07.412 06:55:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:07.412 06:55:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.412 06:55:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.413 06:55:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:07.413 06:55:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:07.413 06:55:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:07.413 06:55:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:07.413 06:55:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:07.413 06:55:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:15:07.413 06:55:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.413 06:55:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:07.413 06:55:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:07.413 06:55:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:07.413 06:55:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:07.413 06:55:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:07.413 06:55:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:07.413 06:55:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:07.413 06:55:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:07.413 06:55:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:07.413 06:55:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:07.413 06:55:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:07.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:15:07.413 00:15:07.413 --- 10.0.0.2 ping statistics --- 00:15:07.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.413 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:15:07.413 06:55:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:07.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:07.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:15:07.413 00:15:07.413 --- 10.0.0.1 ping statistics --- 00:15:07.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.413 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:15:07.413 06:55:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.413 06:55:14 -- nvmf/common.sh@410 -- # return 0 00:15:07.413 06:55:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:07.413 06:55:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.413 06:55:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:07.413 06:55:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:07.413 06:55:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.413 06:55:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:07.413 06:55:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:07.413 06:55:14 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:07.413 06:55:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:07.413 06:55:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:07.413 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:15:07.413 06:55:14 -- nvmf/common.sh@469 -- # nvmfpid=3024500 00:15:07.413 06:55:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:07.413 06:55:14 -- nvmf/common.sh@470 -- # waitforlisten 3024500 00:15:07.413 06:55:14 -- common/autotest_common.sh@819 -- # '[' -z 3024500 ']' 00:15:07.413 06:55:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.413 06:55:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:07.413 06:55:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:07.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.413 06:55:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:07.413 06:55:14 -- common/autotest_common.sh@10 -- # set +x 00:15:07.413 [2024-05-12 06:55:14.410387] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:07.413 [2024-05-12 06:55:14.410471] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.413 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.413 [2024-05-12 06:55:14.473123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.671 [2024-05-12 06:55:14.582712] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:07.671 [2024-05-12 06:55:14.582843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.671 [2024-05-12 06:55:14.582859] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.671 [2024-05-12 06:55:14.582871] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:07.671 [2024-05-12 06:55:14.582895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.607 06:55:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.607 06:55:15 -- common/autotest_common.sh@852 -- # return 0 00:15:08.607 06:55:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:08.607 06:55:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:08.607 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 06:55:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.607 06:55:15 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:08.607 06:55:15 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:08.607 06:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.607 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 [2024-05-12 06:55:15.412287] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.607 06:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.607 06:55:15 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:08.607 06:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.607 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 06:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.607 06:55:15 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.607 06:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.607 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 [2024-05-12 06:55:15.428428] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.607 06:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.607 06:55:15 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:08.607 06:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.607 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 06:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.607 06:55:15 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:08.607 06:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.607 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 malloc0 00:15:08.607 06:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.607 06:55:15 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:08.607 06:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.607 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 06:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.607 06:55:15 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:08.607 06:55:15 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:08.607 06:55:15 -- nvmf/common.sh@520 -- # config=() 00:15:08.607 06:55:15 -- nvmf/common.sh@520 -- # local subsystem config 00:15:08.607 06:55:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:08.607 06:55:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:08.607 { 00:15:08.607 "params": { 00:15:08.607 "name": "Nvme$subsystem", 00:15:08.607 "trtype": "$TEST_TRANSPORT", 00:15:08.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:08.607 "adrfam": "ipv4", 00:15:08.607 "trsvcid": "$NVMF_PORT", 00:15:08.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:08.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:08.607 "hdgst": ${hdgst:-false}, 00:15:08.607 "ddgst": ${ddgst:-false} 00:15:08.607 }, 00:15:08.607 "method": "bdev_nvme_attach_controller" 00:15:08.607 } 00:15:08.607 
EOF 00:15:08.607 )") 00:15:08.607 06:55:15 -- nvmf/common.sh@542 -- # cat 00:15:08.607 06:55:15 -- nvmf/common.sh@544 -- # jq . 00:15:08.607 06:55:15 -- nvmf/common.sh@545 -- # IFS=, 00:15:08.607 06:55:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:08.607 "params": { 00:15:08.607 "name": "Nvme1", 00:15:08.607 "trtype": "tcp", 00:15:08.607 "traddr": "10.0.0.2", 00:15:08.607 "adrfam": "ipv4", 00:15:08.607 "trsvcid": "4420", 00:15:08.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.607 "hdgst": false, 00:15:08.607 "ddgst": false 00:15:08.607 }, 00:15:08.607 "method": "bdev_nvme_attach_controller" 00:15:08.607 }' 00:15:08.607 [2024-05-12 06:55:15.505932] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:08.607 [2024-05-12 06:55:15.506016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024659 ] 00:15:08.607 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.607 [2024-05-12 06:55:15.575303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.607 [2024-05-12 06:55:15.694828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.866 Running I/O for 10 seconds... 
00:15:18.843 00:15:18.843 Latency(us) 00:15:18.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.843 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:18.843 Verification LBA range: start 0x0 length 0x1000 00:15:18.843 Nvme1n1 : 10.02 4749.02 37.10 0.00 0.00 26896.87 2657.85 36700.16 00:15:18.843 =================================================================================================================== 00:15:18.843 Total : 4749.02 37.10 0.00 0.00 26896.87 2657.85 36700.16 00:15:19.408 06:55:26 -- target/zcopy.sh@39 -- # perfpid=3026011 00:15:19.408 06:55:26 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:19.408 06:55:26 -- common/autotest_common.sh@10 -- # set +x 00:15:19.408 06:55:26 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:19.408 06:55:26 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:19.408 06:55:26 -- nvmf/common.sh@520 -- # config=() 00:15:19.408 06:55:26 -- nvmf/common.sh@520 -- # local subsystem config 00:15:19.408 06:55:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:19.408 06:55:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:19.408 { 00:15:19.408 "params": { 00:15:19.408 "name": "Nvme$subsystem", 00:15:19.408 "trtype": "$TEST_TRANSPORT", 00:15:19.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:19.408 "adrfam": "ipv4", 00:15:19.408 "trsvcid": "$NVMF_PORT", 00:15:19.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:19.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:19.408 "hdgst": ${hdgst:-false}, 00:15:19.408 "ddgst": ${ddgst:-false} 00:15:19.408 }, 00:15:19.408 "method": "bdev_nvme_attach_controller" 00:15:19.408 } 00:15:19.408 EOF 00:15:19.408 )") 00:15:19.408 06:55:26 -- nvmf/common.sh@542 -- # cat 00:15:19.408 [2024-05-12 06:55:26.261364] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:19.408 [2024-05-12 06:55:26.261409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.408 06:55:26 -- nvmf/common.sh@544 -- # jq . 00:15:19.408 06:55:26 -- nvmf/common.sh@545 -- # IFS=, 00:15:19.408 06:55:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:19.408 "params": { 00:15:19.408 "name": "Nvme1", 00:15:19.408 "trtype": "tcp", 00:15:19.408 "traddr": "10.0.0.2", 00:15:19.408 "adrfam": "ipv4", 00:15:19.408 "trsvcid": "4420", 00:15:19.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:19.408 "hdgst": false, 00:15:19.408 "ddgst": false 00:15:19.408 }, 00:15:19.408 "method": "bdev_nvme_attach_controller" 00:15:19.408 }' 00:15:19.408 [2024-05-12 06:55:26.269330] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.408 [2024-05-12 06:55:26.269357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.408 [2024-05-12 06:55:26.277350] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.408 [2024-05-12 06:55:26.277374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.408 [2024-05-12 06:55:26.285364] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.409 [2024-05-12 06:55:26.285385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.409 [2024-05-12 06:55:26.293382] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.409 [2024-05-12 06:55:26.293403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.409 [2024-05-12 06:55:26.294918] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:15:19.409 [2024-05-12 06:55:26.294989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026011 ] 00:15:19.409 [2024-05-12 06:55:26.301400] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.409 [2024-05-12 06:55:26.301419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.409 [2024-05-12 06:55:26.309423] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.409 [2024-05-12 06:55:26.309442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.409 [2024-05-12 06:55:26.317443] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.409 [2024-05-12 06:55:26.317462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.409 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.409 [2024-05-12 06:55:26.325466] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.409 [2024-05-12 06:55:26.325485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.409 [2024-05-12 06:55:26.333505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.409 [2024-05-12 06:55:26.333529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.409 [2024-05-12 06:55:26.341527] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.409 [2024-05-12 06:55:26.341551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.409 [2024-05-12 06:55:26.349566] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.409 [2024-05-12 06:55:26.349604] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:19.409 [2024-05-12 06:55:26.357107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:19.409 [2024-05-12 06:55:26.357574] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:19.409 [2024-05-12 06:55:26.357598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same subsystem.c:1753 / nvmf_rpc.c:1513 *ERROR* pair repeats for every subsequent add-namespace attempt, 2024-05-12 06:55:26.365627 through 06:55:27.915125]
00:15:19.409 [2024-05-12 06:55:26.473181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:19.669 Running I/O for 5 seconds...
00:15:21.008 [2024-05-12 06:55:27.924715] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008
[2024-05-12 06:55:27.924743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:27.935746] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:27.935774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:27.945722] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:27.945749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:27.956745] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:27.956773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:27.967022] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:27.967057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:27.978063] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:27.978091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:27.988684] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:27.988720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:27.999076] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:27.999103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:28.011282] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:28.011309] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:28.020216] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:28.020243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:28.031709] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:28.031737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:28.042092] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:28.042118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:28.052670] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:28.052715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:28.064709] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:28.064737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:28.073782] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:28.073810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:28.086552] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:28.086579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.008 [2024-05-12 06:55:28.095985] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.008 [2024-05-12 06:55:28.096015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:21.009 [2024-05-12 06:55:28.107255] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.009 [2024-05-12 06:55:28.107283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.009 [2024-05-12 06:55:28.117156] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.009 [2024-05-12 06:55:28.117184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.009 [2024-05-12 06:55:28.128557] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.009 [2024-05-12 06:55:28.128585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.139174] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.139203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.149489] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.149516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.159924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.159951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.170438] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.170467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.180998] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.181025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.191528] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.191555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.202331] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.202359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.212369] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.212397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.223516] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.223544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.233658] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.233708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.244690] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.244727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.255234] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.255261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.266109] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.266138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.277247] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.277274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.287453] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.287479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.298054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.298081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.308562] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.308588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.321366] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.321393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.330827] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.330854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.341756] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.341783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.352265] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.352292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.362978] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 
[2024-05-12 06:55:28.363020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.373662] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.373713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.384193] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.384219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.268 [2024-05-12 06:55:28.393650] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.268 [2024-05-12 06:55:28.393692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.404383] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.404410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.414263] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.414291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.425441] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.425467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.437725] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.437753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.447143] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.447170] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.458071] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.458098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.468941] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.468968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.481146] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.481172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.489946] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.489973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.501230] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.501256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.511381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.511407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.522047] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.522074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.532280] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.532306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:21.528 [2024-05-12 06:55:28.542908] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.542936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.553420] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.553447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.563572] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.563599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.573963] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.574005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.584132] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.584158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.593846] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.593873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.604594] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.604621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.614956] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.614997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.625402] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.625428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.635134] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.635160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.645886] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.645913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-05-12 06:55:28.655690] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-05-12 06:55:28.655726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.788 [2024-05-12 06:55:28.666956] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.788 [2024-05-12 06:55:28.666997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.788 [2024-05-12 06:55:28.677351] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.788 [2024-05-12 06:55:28.677377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.788 [2024-05-12 06:55:28.687558] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.788 [2024-05-12 06:55:28.687585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.788 [2024-05-12 06:55:28.698081] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.698108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.708391] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.708418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.719262] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.719288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.729733] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.729760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.742547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.742574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.751600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.751626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.762564] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.762591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.773078] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.773105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.782838] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.782865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.794005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 
[2024-05-12 06:55:28.794032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.804046] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.804072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.814774] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.814808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.824889] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.824916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.835227] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.835254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.847585] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.847612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.856928] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.856955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.867746] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.867773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.878103] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.878131] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.888384] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.888410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.899498] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.899525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.789 [2024-05-12 06:55:28.909793] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.789 [2024-05-12 06:55:28.909820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.047 [2024-05-12 06:55:28.920638] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.047 [2024-05-12 06:55:28.920665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.047 [2024-05-12 06:55:28.931026] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.047 [2024-05-12 06:55:28.931053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.047 [2024-05-12 06:55:28.943905] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:28.943933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:28.953646] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:28.953688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:28.964335] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:28.964361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:22.048 [2024-05-12 06:55:28.974551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:28.974577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:28.985204] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:28.985231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:28.995252] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:28.995278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:29.005977] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:29.006019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:29.016261] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:29.016296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:29.026599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:29.026626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:29.037550] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:29.037577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:29.047689] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:29.047729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048 [2024-05-12 06:55:29.058806] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.048 [2024-05-12 06:55:29.058833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.048
[... the same two-line error pair (subsystem.c:1753 / nvmf_rpc.c:1513) repeats at roughly 10 ms intervals from 06:55:29.069 through 06:55:30.808; only the timestamps differ ...]
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.808853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.819635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.819662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.829109] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.829142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.840164] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.840191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.850612] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.850638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.863079] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.863106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.872443] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.872471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.883301] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.883328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.893345] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.893371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.904018] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.904045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.916178] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.916205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.925143] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.925170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.936377] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.936404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.946720] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.946748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.961187] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.961214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.970583] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 [2024-05-12 06:55:30.970609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.859 [2024-05-12 06:55:30.981602] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.859 
[2024-05-12 06:55:30.981629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.118 [2024-05-12 06:55:30.992074] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.118 [2024-05-12 06:55:30.992101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.118 [2024-05-12 06:55:31.002467] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.118 [2024-05-12 06:55:31.002494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.118 [2024-05-12 06:55:31.012336] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.118 [2024-05-12 06:55:31.012378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.118 [2024-05-12 06:55:31.023178] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.118 [2024-05-12 06:55:31.023205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.118 [2024-05-12 06:55:31.033907] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.118 [2024-05-12 06:55:31.033943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.044572] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.044598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.056674] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.056726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.065913] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.065940] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.077219] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.077245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.089771] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.089798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.098737] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.098765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.111317] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.111343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.120743] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.120771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.131949] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.131991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.142488] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.142514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.152808] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.152835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:24.119 [2024-05-12 06:55:31.163759] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.163786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.174504] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.174530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.185032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.185058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.195255] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.195281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.207809] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.207835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.216986] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.217028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.227635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.227661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.119 [2024-05-12 06:55:31.237812] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.119 [2024-05-12 06:55:31.237839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.248640] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.248668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.259156] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.259183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.269479] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.269506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.280187] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.280215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.290792] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.290821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.301563] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.301591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.312139] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.312166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.322922] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.322950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.333398] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.333425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.343585] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.343611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.354155] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.354182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.365028] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.365055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.375438] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.375465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.385445] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.385472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.395475] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.395501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.405805] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.405832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.416426] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 
[2024-05-12 06:55:31.416452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.426934] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.426987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.437824] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.437851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.448582] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.448609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.458534] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.458561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.468626] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.468652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.478033] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.478060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.488874] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.488902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.378 [2024-05-12 06:55:31.499181] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.378 [2024-05-12 06:55:31.499208] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.637 [2024-05-12 06:55:31.509965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.637 [2024-05-12 06:55:31.510008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.637 [2024-05-12 06:55:31.519959] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.637 [2024-05-12 06:55:31.519987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.637 [2024-05-12 06:55:31.531263] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.637 [2024-05-12 06:55:31.531290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.637 [2024-05-12 06:55:31.542342] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.637 [2024-05-12 06:55:31.542369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.637 [2024-05-12 06:55:31.552897] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.637 [2024-05-12 06:55:31.552925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.637 [2024-05-12 06:55:31.563317] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.637 [2024-05-12 06:55:31.563343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.637 [2024-05-12 06:55:31.574724] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.574759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.585144] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.585171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:24.638 [2024-05-12 06:55:31.595620] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.595647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.608215] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.608242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.617290] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.617318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.630644] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.630670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.640775] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.640802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.650998] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.651024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.661094] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.661120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.671384] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.671410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.682125] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.682151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.692212] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.692239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.703225] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.703251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.713631] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.713657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.723587] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.723613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.734694] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.734729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.745242] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.745269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.638 [2024-05-12 06:55:31.757971] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.638 [2024-05-12 06:55:31.757998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.768011] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.768039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.778930] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.778957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.788924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.788951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.798616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.798643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 00:15:24.898 Latency(us) 00:15:24.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.898 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:24.898 Nvme1n1 : 5.01 12071.53 94.31 0.00 0.00 10589.55 4538.97 22427.88 00:15:24.898 =================================================================================================================== 00:15:24.898 Total : 12071.53 94.31 0.00 0.00 10589.55 4538.97 22427.88 00:15:24.898 [2024-05-12 06:55:31.803205] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.803232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.811231] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.811258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.819246] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:15:24.898 [2024-05-12 06:55:31.819272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.827295] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.827330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.835326] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.835368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.843343] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.843383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.851367] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.851407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.859385] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.859425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.867416] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.867458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.875431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.875471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.883453] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.883495] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.891483] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.891526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.899505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.899548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.907527] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.907569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.915547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.915588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.923565] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.923606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.931587] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.931628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.939612] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.939653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.947600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.947635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:24.898 [2024-05-12 06:55:31.955622] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.955646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.963641] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.963666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.971664] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.971688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.979678] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.979708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.987756] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.987797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:31.995769] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:31.995808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:32.003775] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:32.003798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:32.011786] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:32.011807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.898 [2024-05-12 06:55:32.019805] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.898 [2024-05-12 06:55:32.019825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.157 [2024-05-12 06:55:32.027817] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.157 [2024-05-12 06:55:32.027839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.157 [2024-05-12 06:55:32.035840] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.157 [2024-05-12 06:55:32.035862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.157 [2024-05-12 06:55:32.043906] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.157 [2024-05-12 06:55:32.043948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.157 [2024-05-12 06:55:32.051928] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.157 [2024-05-12 06:55:32.051968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.157 [2024-05-12 06:55:32.059921] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.157 [2024-05-12 06:55:32.059946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.157 [2024-05-12 06:55:32.067928] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.157 [2024-05-12 06:55:32.067949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.157 [2024-05-12 06:55:32.075952] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.157 [2024-05-12 06:55:32.075973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 
42: kill: (3026011) - No such process 00:15:25.157 06:55:32 -- target/zcopy.sh@49 -- # wait 3026011 00:15:25.157 06:55:32 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.157 06:55:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.157 06:55:32 -- common/autotest_common.sh@10 -- # set +x 00:15:25.157 06:55:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.157 06:55:32 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:25.157 06:55:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.157 06:55:32 -- common/autotest_common.sh@10 -- # set +x 00:15:25.157 delay0 00:15:25.157 06:55:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.157 06:55:32 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:25.157 06:55:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.157 06:55:32 -- common/autotest_common.sh@10 -- # set +x 00:15:25.157 06:55:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.157 06:55:32 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:25.157 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.157 [2024-05-12 06:55:32.156876] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:31.724 Initializing NVMe Controllers 00:15:31.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:31.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:31.724 Initialization complete. Launching workers. 
00:15:31.724 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 128 00:15:31.724 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 415, failed to submit 33 00:15:31.724 success 235, unsuccess 180, failed 0 00:15:31.724 06:55:38 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:31.724 06:55:38 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:31.724 06:55:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:31.724 06:55:38 -- nvmf/common.sh@116 -- # sync 00:15:31.724 06:55:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:31.724 06:55:38 -- nvmf/common.sh@119 -- # set +e 00:15:31.724 06:55:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:31.724 06:55:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:31.724 rmmod nvme_tcp 00:15:31.724 rmmod nvme_fabrics 00:15:31.724 rmmod nvme_keyring 00:15:31.724 06:55:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:31.724 06:55:38 -- nvmf/common.sh@123 -- # set -e 00:15:31.724 06:55:38 -- nvmf/common.sh@124 -- # return 0 00:15:31.724 06:55:38 -- nvmf/common.sh@477 -- # '[' -n 3024500 ']' 00:15:31.724 06:55:38 -- nvmf/common.sh@478 -- # killprocess 3024500 00:15:31.724 06:55:38 -- common/autotest_common.sh@926 -- # '[' -z 3024500 ']' 00:15:31.724 06:55:38 -- common/autotest_common.sh@930 -- # kill -0 3024500 00:15:31.724 06:55:38 -- common/autotest_common.sh@931 -- # uname 00:15:31.724 06:55:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:31.724 06:55:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3024500 00:15:31.724 06:55:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:31.724 06:55:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:31.724 06:55:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3024500' 00:15:31.724 killing process with pid 3024500 00:15:31.724 06:55:38 -- common/autotest_common.sh@945 -- # kill 3024500 
00:15:31.724 06:55:38 -- common/autotest_common.sh@950 -- # wait 3024500 00:15:31.724 06:55:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:31.724 06:55:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:31.724 06:55:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:31.724 06:55:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.724 06:55:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:31.724 06:55:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.724 06:55:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.724 06:55:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.632 06:55:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:33.632 00:15:33.632 real 0m28.517s 00:15:33.632 user 0m36.097s 00:15:33.632 sys 0m9.999s 00:15:33.632 06:55:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.632 06:55:40 -- common/autotest_common.sh@10 -- # set +x 00:15:33.632 ************************************ 00:15:33.632 END TEST nvmf_zcopy 00:15:33.632 ************************************ 00:15:33.632 06:55:40 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:33.632 06:55:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:33.632 06:55:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:33.632 06:55:40 -- common/autotest_common.sh@10 -- # set +x 00:15:33.632 ************************************ 00:15:33.632 START TEST nvmf_nmic 00:15:33.632 ************************************ 00:15:33.632 06:55:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:33.889 * Looking for test storage... 
00:15:33.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.889 06:55:40 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.889 06:55:40 -- nvmf/common.sh@7 -- # uname -s 00:15:33.889 06:55:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.889 06:55:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.889 06:55:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.889 06:55:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.889 06:55:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.889 06:55:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.889 06:55:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.889 06:55:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.889 06:55:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.889 06:55:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.889 06:55:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.889 06:55:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.889 06:55:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.889 06:55:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.889 06:55:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.889 06:55:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.889 06:55:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.889 06:55:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.889 06:55:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.889 06:55:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.890 06:55:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.890 06:55:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.890 06:55:40 -- paths/export.sh@5 -- # export PATH 00:15:33.890 06:55:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.890 06:55:40 -- nvmf/common.sh@46 -- # : 0 00:15:33.890 06:55:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:33.890 06:55:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:33.890 06:55:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:33.890 06:55:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.890 06:55:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.890 06:55:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:33.890 06:55:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:33.890 06:55:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:33.890 06:55:40 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.890 06:55:40 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.890 06:55:40 -- target/nmic.sh@14 -- # nvmftestinit 00:15:33.890 06:55:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:33.890 06:55:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.890 06:55:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:33.890 06:55:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:33.890 06:55:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:33.890 06:55:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.890 06:55:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.890 06:55:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.890 06:55:40 -- nvmf/common.sh@402 
-- # [[ phy != virt ]] 00:15:33.890 06:55:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:33.890 06:55:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:33.890 06:55:40 -- common/autotest_common.sh@10 -- # set +x 00:15:35.791 06:55:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:35.791 06:55:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:35.791 06:55:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:35.791 06:55:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:35.791 06:55:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:35.791 06:55:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:35.791 06:55:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:35.791 06:55:42 -- nvmf/common.sh@294 -- # net_devs=() 00:15:35.791 06:55:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:35.791 06:55:42 -- nvmf/common.sh@295 -- # e810=() 00:15:35.791 06:55:42 -- nvmf/common.sh@295 -- # local -ga e810 00:15:35.791 06:55:42 -- nvmf/common.sh@296 -- # x722=() 00:15:35.791 06:55:42 -- nvmf/common.sh@296 -- # local -ga x722 00:15:35.791 06:55:42 -- nvmf/common.sh@297 -- # mlx=() 00:15:35.791 06:55:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:35.791 06:55:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.791 06:55:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:35.791 06:55:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:35.791 06:55:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:35.791 06:55:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:35.791 06:55:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:35.791 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:35.791 06:55:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:35.791 06:55:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:35.791 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:35.791 06:55:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:35.791 06:55:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:35.791 06:55:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.791 06:55:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:35.791 06:55:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.791 06:55:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:35.791 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:35.791 06:55:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.791 06:55:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:35.791 06:55:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.791 06:55:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:35.791 06:55:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.791 06:55:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:35.791 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:35.791 06:55:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.791 06:55:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:35.791 06:55:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:35.791 06:55:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:35.791 06:55:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:35.791 06:55:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.791 06:55:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.791 06:55:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.791 06:55:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:35.791 06:55:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.791 06:55:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.791 06:55:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:15:35.791 06:55:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.791 06:55:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.792 06:55:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:35.792 06:55:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:35.792 06:55:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.792 06:55:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.792 06:55:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.792 06:55:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.792 06:55:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:35.792 06:55:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.792 06:55:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.792 06:55:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.051 06:55:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:36.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:15:36.051 00:15:36.051 --- 10.0.0.2 ping statistics --- 00:15:36.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.051 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:15:36.051 06:55:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:15:36.051 00:15:36.051 --- 10.0.0.1 ping statistics --- 00:15:36.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.051 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:15:36.051 06:55:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.051 06:55:42 -- nvmf/common.sh@410 -- # return 0 00:15:36.051 06:55:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:36.051 06:55:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.051 06:55:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:36.051 06:55:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:36.051 06:55:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.051 06:55:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:36.051 06:55:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:36.051 06:55:42 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:36.051 06:55:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:36.051 06:55:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:36.051 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.051 06:55:42 -- nvmf/common.sh@469 -- # nvmfpid=3029433 00:15:36.051 06:55:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.051 06:55:42 -- nvmf/common.sh@470 -- # waitforlisten 3029433 00:15:36.051 06:55:42 -- common/autotest_common.sh@819 -- # '[' -z 3029433 ']' 00:15:36.051 06:55:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.051 06:55:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:36.051 06:55:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:36.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.051 06:55:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:36.051 06:55:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.051 [2024-05-12 06:55:42.995577] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:36.051 [2024-05-12 06:55:42.995651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.051 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.051 [2024-05-12 06:55:43.059962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.051 [2024-05-12 06:55:43.165965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:36.051 [2024-05-12 06:55:43.166101] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.051 [2024-05-12 06:55:43.166118] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.052 [2024-05-12 06:55:43.166130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:36.052 [2024-05-12 06:55:43.166180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.052 [2024-05-12 06:55:43.166239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.052 [2024-05-12 06:55:43.166304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.052 [2024-05-12 06:55:43.166306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.987 06:55:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:36.987 06:55:43 -- common/autotest_common.sh@852 -- # return 0 00:15:36.987 06:55:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:36.987 06:55:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:36.987 06:55:43 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 06:55:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.987 06:55:44 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.987 06:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.987 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 [2024-05-12 06:55:44.011363] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.987 06:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.987 06:55:44 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:36.987 06:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.987 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 Malloc0 00:15:36.987 06:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.987 06:55:44 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:36.987 06:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.987 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 06:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:15:36.987 06:55:44 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.987 06:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.987 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 06:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.987 06:55:44 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.987 06:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.987 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 [2024-05-12 06:55:44.063312] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.987 06:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.987 06:55:44 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:36.987 test case1: single bdev can't be used in multiple subsystems 00:15:36.987 06:55:44 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:36.987 06:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.987 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 06:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.987 06:55:44 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:36.987 06:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.987 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 06:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.987 06:55:44 -- target/nmic.sh@28 -- # nmic_status=0 00:15:36.987 06:55:44 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:36.987 06:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.987 06:55:44 -- common/autotest_common.sh@10 
-- # set +x 00:15:36.987 [2024-05-12 06:55:44.087171] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:36.987 [2024-05-12 06:55:44.087200] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:36.987 [2024-05-12 06:55:44.087221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.987 request: 00:15:36.987 { 00:15:36.987 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:36.987 "namespace": { 00:15:36.987 "bdev_name": "Malloc0" 00:15:36.987 }, 00:15:36.987 "method": "nvmf_subsystem_add_ns", 00:15:36.987 "req_id": 1 00:15:36.987 } 00:15:36.987 Got JSON-RPC error response 00:15:36.987 response: 00:15:36.987 { 00:15:36.987 "code": -32602, 00:15:36.987 "message": "Invalid parameters" 00:15:36.987 } 00:15:36.987 06:55:44 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:36.987 06:55:44 -- target/nmic.sh@29 -- # nmic_status=1 00:15:36.987 06:55:44 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:36.987 06:55:44 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:36.987 Adding namespace failed - expected result. 
00:15:36.987 06:55:44 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:36.987 test case2: host connect to nvmf target in multiple paths 00:15:36.987 06:55:44 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:36.987 06:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.987 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 [2024-05-12 06:55:44.095289] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:36.987 06:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.987 06:55:44 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:37.925 06:55:44 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:38.202 06:55:45 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.202 06:55:45 -- common/autotest_common.sh@1177 -- # local i=0 00:15:38.202 06:55:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.202 06:55:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:38.202 06:55:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:40.747 06:55:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:40.747 06:55:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:40.747 06:55:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.747 06:55:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:40.747 06:55:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 
00:15:40.747 06:55:47 -- common/autotest_common.sh@1187 -- # return 0 00:15:40.747 06:55:47 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:40.747 [global] 00:15:40.747 thread=1 00:15:40.747 invalidate=1 00:15:40.747 rw=write 00:15:40.747 time_based=1 00:15:40.747 runtime=1 00:15:40.747 ioengine=libaio 00:15:40.747 direct=1 00:15:40.747 bs=4096 00:15:40.747 iodepth=1 00:15:40.747 norandommap=0 00:15:40.747 numjobs=1 00:15:40.747 00:15:40.747 verify_dump=1 00:15:40.747 verify_backlog=512 00:15:40.747 verify_state_save=0 00:15:40.747 do_verify=1 00:15:40.747 verify=crc32c-intel 00:15:40.747 [job0] 00:15:40.747 filename=/dev/nvme0n1 00:15:40.747 Could not set queue depth (nvme0n1) 00:15:40.747 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:40.747 fio-3.35 00:15:40.747 Starting 1 thread 00:15:41.685 00:15:41.685 job0: (groupid=0, jobs=1): err= 0: pid=3030122: Sun May 12 06:55:48 2024 00:15:41.685 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:41.685 slat (nsec): min=8651, max=54978, avg=18287.41, stdev=3818.86 00:15:41.685 clat (usec): min=315, max=577, avg=382.29, stdev=38.79 00:15:41.685 lat (usec): min=325, max=595, avg=400.58, stdev=39.55 00:15:41.685 clat percentiles (usec): 00:15:41.685 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:15:41.685 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 379], 00:15:41.685 | 70.00th=[ 392], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 486], 00:15:41.685 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 578], 99.95th=[ 578], 00:15:41.685 | 99.99th=[ 578] 00:15:41.685 write: IOPS=1528, BW=6114KiB/s (6261kB/s)(6120KiB/1001msec); 0 zone resets 00:15:41.685 slat (usec): min=9, max=33027, avg=48.80, stdev=843.75 00:15:41.685 clat (usec): min=195, max=513, avg=325.13, stdev=78.71 00:15:41.685 lat (usec): min=204, max=33338, avg=373.93, 
stdev=847.67 00:15:41.685 clat percentiles (usec): 00:15:41.685 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 239], 00:15:41.685 | 30.00th=[ 269], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 347], 00:15:41.685 | 70.00th=[ 371], 80.00th=[ 404], 90.00th=[ 445], 95.00th=[ 457], 00:15:41.685 | 99.00th=[ 474], 99.50th=[ 482], 99.90th=[ 502], 99.95th=[ 515], 00:15:41.685 | 99.99th=[ 515] 00:15:41.685 bw ( KiB/s): min= 5152, max= 5152, per=84.27%, avg=5152.00, stdev= 0.00, samples=1 00:15:41.685 iops : min= 1288, max= 1288, avg=1288.00, stdev= 0.00, samples=1 00:15:41.685 lat (usec) : 250=14.49%, 500=83.95%, 750=1.57% 00:15:41.685 cpu : usr=4.20%, sys=8.10%, ctx=2557, majf=0, minf=2 00:15:41.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:41.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.686 issued rwts: total=1024,1530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:41.686 00:15:41.686 Run status group 0 (all jobs): 00:15:41.686 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:15:41.686 WRITE: bw=6114KiB/s (6261kB/s), 6114KiB/s-6114KiB/s (6261kB/s-6261kB/s), io=6120KiB (6267kB), run=1001-1001msec 00:15:41.686 00:15:41.686 Disk stats (read/write): 00:15:41.686 nvme0n1: ios=1076/1151, merge=0/0, ticks=1019/354, in_queue=1373, util=98.80% 00:15:41.686 06:55:48 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:41.686 06:55:48 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:41.686 06:55:48 -- common/autotest_common.sh@1198 -- # local i=0 00:15:41.686 06:55:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:41.686 06:55:48 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.686 06:55:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:41.686 06:55:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.686 06:55:48 -- common/autotest_common.sh@1210 -- # return 0 00:15:41.686 06:55:48 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:41.686 06:55:48 -- target/nmic.sh@53 -- # nvmftestfini 00:15:41.686 06:55:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:41.686 06:55:48 -- nvmf/common.sh@116 -- # sync 00:15:41.686 06:55:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:41.686 06:55:48 -- nvmf/common.sh@119 -- # set +e 00:15:41.686 06:55:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:41.686 06:55:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:41.686 rmmod nvme_tcp 00:15:41.686 rmmod nvme_fabrics 00:15:41.944 rmmod nvme_keyring 00:15:41.944 06:55:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:41.944 06:55:48 -- nvmf/common.sh@123 -- # set -e 00:15:41.944 06:55:48 -- nvmf/common.sh@124 -- # return 0 00:15:41.944 06:55:48 -- nvmf/common.sh@477 -- # '[' -n 3029433 ']' 00:15:41.944 06:55:48 -- nvmf/common.sh@478 -- # killprocess 3029433 00:15:41.944 06:55:48 -- common/autotest_common.sh@926 -- # '[' -z 3029433 ']' 00:15:41.944 06:55:48 -- common/autotest_common.sh@930 -- # kill -0 3029433 00:15:41.944 06:55:48 -- common/autotest_common.sh@931 -- # uname 00:15:41.944 06:55:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:41.944 06:55:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3029433 00:15:41.944 06:55:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:41.944 06:55:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:41.944 06:55:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3029433' 00:15:41.944 killing process with pid 3029433 00:15:41.944 06:55:48 -- 
common/autotest_common.sh@945 -- # kill 3029433 00:15:41.944 06:55:48 -- common/autotest_common.sh@950 -- # wait 3029433 00:15:42.204 06:55:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:42.204 06:55:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:42.204 06:55:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:42.204 06:55:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.204 06:55:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:42.204 06:55:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.204 06:55:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.204 06:55:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.115 06:55:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:44.115 00:15:44.115 real 0m10.455s 00:15:44.115 user 0m24.212s 00:15:44.115 sys 0m2.560s 00:15:44.115 06:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.115 06:55:51 -- common/autotest_common.sh@10 -- # set +x 00:15:44.115 ************************************ 00:15:44.115 END TEST nvmf_nmic 00:15:44.115 ************************************ 00:15:44.115 06:55:51 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:44.115 06:55:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:44.115 06:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:44.115 06:55:51 -- common/autotest_common.sh@10 -- # set +x 00:15:44.115 ************************************ 00:15:44.115 START TEST nvmf_fio_target 00:15:44.115 ************************************ 00:15:44.115 06:55:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:44.374 * Looking for test storage... 
00:15:44.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.374 06:55:51 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.374 06:55:51 -- nvmf/common.sh@7 -- # uname -s 00:15:44.374 06:55:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.374 06:55:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.374 06:55:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.374 06:55:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.374 06:55:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.374 06:55:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.374 06:55:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.374 06:55:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.374 06:55:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.374 06:55:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.374 06:55:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.374 06:55:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.374 06:55:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.374 06:55:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.374 06:55:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.374 06:55:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.374 06:55:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.374 06:55:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.374 06:55:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.374 06:55:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.374 06:55:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.374 06:55:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.374 06:55:51 -- paths/export.sh@5 -- # export PATH 00:15:44.374 06:55:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.374 06:55:51 -- nvmf/common.sh@46 -- # : 0 00:15:44.374 06:55:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:44.374 06:55:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:44.374 06:55:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:44.374 06:55:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.374 06:55:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.374 06:55:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:44.374 06:55:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:44.374 06:55:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:44.374 06:55:51 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.374 06:55:51 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.374 06:55:51 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.374 06:55:51 -- target/fio.sh@16 -- # nvmftestinit 00:15:44.374 06:55:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:44.374 06:55:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.374 06:55:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:44.374 06:55:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:44.374 06:55:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:44.374 06:55:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.374 06:55:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:15:44.374 06:55:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.374 06:55:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:44.374 06:55:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:44.374 06:55:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:44.374 06:55:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.281 06:55:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:46.281 06:55:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:46.281 06:55:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:46.281 06:55:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:46.281 06:55:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:46.281 06:55:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:46.281 06:55:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:46.281 06:55:53 -- nvmf/common.sh@294 -- # net_devs=() 00:15:46.281 06:55:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:46.281 06:55:53 -- nvmf/common.sh@295 -- # e810=() 00:15:46.281 06:55:53 -- nvmf/common.sh@295 -- # local -ga e810 00:15:46.281 06:55:53 -- nvmf/common.sh@296 -- # x722=() 00:15:46.281 06:55:53 -- nvmf/common.sh@296 -- # local -ga x722 00:15:46.281 06:55:53 -- nvmf/common.sh@297 -- # mlx=() 00:15:46.281 06:55:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:46.281 06:55:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.281 06:55:53 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.281 06:55:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:46.281 06:55:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:46.281 06:55:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:46.281 06:55:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:46.281 06:55:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:46.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:46.281 06:55:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:46.281 06:55:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:46.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:46.281 06:55:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:46.281 
06:55:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:46.281 06:55:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:46.281 06:55:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.281 06:55:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:46.281 06:55:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.281 06:55:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:46.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:46.281 06:55:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.281 06:55:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:46.282 06:55:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.282 06:55:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:46.282 06:55:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.282 06:55:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:46.282 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:46.282 06:55:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.282 06:55:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:46.282 06:55:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:46.282 06:55:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:46.282 06:55:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:46.282 06:55:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:46.282 06:55:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.282 06:55:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.282 06:55:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:46.282 06:55:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:46.282 06:55:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:46.282 06:55:53 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:46.282 06:55:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:46.282 06:55:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:46.282 06:55:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.282 06:55:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:46.282 06:55:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:46.282 06:55:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:46.282 06:55:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:46.282 06:55:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:46.282 06:55:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:46.282 06:55:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:46.282 06:55:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:46.282 06:55:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:46.282 06:55:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:46.282 06:55:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:46.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:15:46.282 00:15:46.282 --- 10.0.0.2 ping statistics --- 00:15:46.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.282 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:15:46.282 06:55:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:46.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:15:46.282 00:15:46.282 --- 10.0.0.1 ping statistics --- 00:15:46.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.282 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:46.282 06:55:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.282 06:55:53 -- nvmf/common.sh@410 -- # return 0 00:15:46.282 06:55:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:46.282 06:55:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.282 06:55:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:46.282 06:55:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:46.282 06:55:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.282 06:55:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:46.282 06:55:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:46.282 06:55:53 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:46.282 06:55:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:46.282 06:55:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:46.282 06:55:53 -- common/autotest_common.sh@10 -- # set +x 00:15:46.282 06:55:53 -- nvmf/common.sh@469 -- # nvmfpid=3032211 00:15:46.282 06:55:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:46.282 06:55:53 -- nvmf/common.sh@470 -- # waitforlisten 3032211 00:15:46.282 06:55:53 -- common/autotest_common.sh@819 -- # '[' -z 3032211 ']' 00:15:46.282 06:55:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.282 06:55:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:46.282 06:55:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:46.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.282 06:55:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:46.282 06:55:53 -- common/autotest_common.sh@10 -- # set +x 00:15:46.282 [2024-05-12 06:55:53.376122] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:46.282 [2024-05-12 06:55:53.376190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.282 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.541 [2024-05-12 06:55:53.439236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.541 [2024-05-12 06:55:53.545660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:46.541 [2024-05-12 06:55:53.545795] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.541 [2024-05-12 06:55:53.545813] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.541 [2024-05-12 06:55:53.545824] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:46.541 [2024-05-12 06:55:53.545871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.541 [2024-05-12 06:55:53.545929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.541 [2024-05-12 06:55:53.545994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.541 [2024-05-12 06:55:53.545997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.477 06:55:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:47.477 06:55:54 -- common/autotest_common.sh@852 -- # return 0 00:15:47.477 06:55:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:47.477 06:55:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:47.477 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:15:47.477 06:55:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.477 06:55:54 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:47.736 [2024-05-12 06:55:54.638178] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.736 06:55:54 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:47.994 06:55:54 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:47.994 06:55:54 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:48.253 06:55:55 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:48.253 06:55:55 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:48.511 06:55:55 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:48.511 06:55:55 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:48.769 06:55:55 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:15:48.769 06:55:55 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:49.028 06:55:55 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:49.286 06:55:56 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:49.286 06:55:56 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:49.545 06:55:56 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:49.545 06:55:56 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:49.803 06:55:56 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:49.803 06:55:56 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:50.062 06:55:56 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:50.322 06:55:57 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:50.322 06:55:57 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.582 06:55:57 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:50.582 06:55:57 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:50.582 06:55:57 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.841 [2024-05-12 06:55:57.943958] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.841 06:55:57 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:51.099 06:55:58 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:51.358 06:55:58 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:51.927 06:55:58 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:51.927 06:55:58 -- common/autotest_common.sh@1177 -- # local i=0 00:15:51.927 06:55:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.927 06:55:58 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:15:51.927 06:55:58 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:15:51.927 06:55:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:54.467 06:56:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:54.467 06:56:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:54.467 06:56:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.467 06:56:01 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:15:54.467 06:56:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.467 06:56:01 -- common/autotest_common.sh@1187 -- # return 0 00:15:54.467 06:56:01 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:54.467 [global] 00:15:54.467 thread=1 00:15:54.467 invalidate=1 00:15:54.467 rw=write 00:15:54.467 time_based=1 00:15:54.467 runtime=1 00:15:54.467 ioengine=libaio 00:15:54.467 direct=1 00:15:54.467 bs=4096 00:15:54.467 
iodepth=1 00:15:54.467 norandommap=0 00:15:54.467 numjobs=1 00:15:54.467 00:15:54.467 verify_dump=1 00:15:54.467 verify_backlog=512 00:15:54.467 verify_state_save=0 00:15:54.467 do_verify=1 00:15:54.467 verify=crc32c-intel 00:15:54.467 [job0] 00:15:54.467 filename=/dev/nvme0n1 00:15:54.467 [job1] 00:15:54.467 filename=/dev/nvme0n2 00:15:54.467 [job2] 00:15:54.467 filename=/dev/nvme0n3 00:15:54.467 [job3] 00:15:54.467 filename=/dev/nvme0n4 00:15:54.467 Could not set queue depth (nvme0n1) 00:15:54.467 Could not set queue depth (nvme0n2) 00:15:54.467 Could not set queue depth (nvme0n3) 00:15:54.467 Could not set queue depth (nvme0n4) 00:15:54.467 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.467 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.467 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.467 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.467 fio-3.35 00:15:54.467 Starting 4 threads 00:15:55.469 00:15:55.469 job0: (groupid=0, jobs=1): err= 0: pid=3033317: Sun May 12 06:56:02 2024 00:15:55.469 read: IOPS=79, BW=318KiB/s (326kB/s)(328KiB/1031msec) 00:15:55.469 slat (nsec): min=9165, max=71947, avg=26226.13, stdev=9456.54 00:15:55.469 clat (usec): min=447, max=42071, avg=10129.79, stdev=17534.76 00:15:55.469 lat (usec): min=479, max=42082, avg=10156.02, stdev=17534.07 00:15:55.469 clat percentiles (usec): 00:15:55.469 | 1.00th=[ 449], 5.00th=[ 510], 10.00th=[ 519], 20.00th=[ 537], 00:15:55.469 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 570], 60.00th=[ 586], 00:15:55.469 | 70.00th=[ 619], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:15:55.469 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:55.469 | 99.99th=[42206] 00:15:55.469 write: IOPS=496, BW=1986KiB/s 
(2034kB/s)(2048KiB/1031msec); 0 zone resets 00:15:55.469 slat (nsec): min=7355, max=78174, avg=29202.87, stdev=12703.13 00:15:55.469 clat (usec): min=236, max=912, avg=349.68, stdev=76.93 00:15:55.469 lat (usec): min=246, max=950, avg=378.89, stdev=83.23 00:15:55.469 clat percentiles (usec): 00:15:55.469 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 269], 00:15:55.469 | 30.00th=[ 289], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 375], 00:15:55.469 | 70.00th=[ 388], 80.00th=[ 408], 90.00th=[ 449], 95.00th=[ 474], 00:15:55.469 | 99.00th=[ 506], 99.50th=[ 553], 99.90th=[ 914], 99.95th=[ 914], 00:15:55.469 | 99.99th=[ 914] 00:15:55.469 bw ( KiB/s): min= 4096, max= 4096, per=25.85%, avg=4096.00, stdev= 0.00, samples=1 00:15:55.469 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:55.470 lat (usec) : 250=7.24%, 500=78.11%, 750=11.28%, 1000=0.17% 00:15:55.470 lat (msec) : 50=3.20% 00:15:55.470 cpu : usr=0.78%, sys=1.65%, ctx=594, majf=0, minf=2 00:15:55.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.470 issued rwts: total=82,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.470 job1: (groupid=0, jobs=1): err= 0: pid=3033321: Sun May 12 06:56:02 2024 00:15:55.470 read: IOPS=992, BW=3969KiB/s (4064kB/s)(4104KiB/1034msec) 00:15:55.470 slat (nsec): min=6519, max=56966, avg=17274.16, stdev=6637.75 00:15:55.470 clat (usec): min=336, max=40951, avg=522.80, stdev=1781.01 00:15:55.470 lat (usec): min=345, max=40979, avg=540.07, stdev=1781.04 00:15:55.470 clat percentiles (usec): 00:15:55.470 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 379], 00:15:55.470 | 30.00th=[ 388], 40.00th=[ 408], 50.00th=[ 445], 60.00th=[ 465], 00:15:55.470 | 70.00th=[ 478], 80.00th=[ 
498], 90.00th=[ 537], 95.00th=[ 562], 00:15:55.470 | 99.00th=[ 594], 99.50th=[ 717], 99.90th=[40633], 99.95th=[41157], 00:15:55.470 | 99.99th=[41157] 00:15:55.470 write: IOPS=1485, BW=5942KiB/s (6085kB/s)(6144KiB/1034msec); 0 zone resets 00:15:55.470 slat (usec): min=7, max=142, avg=19.20, stdev= 8.31 00:15:55.470 clat (usec): min=192, max=3565, avg=284.38, stdev=112.31 00:15:55.470 lat (usec): min=200, max=3586, avg=303.58, stdev=112.69 00:15:55.470 clat percentiles (usec): 00:15:55.470 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 241], 00:15:55.470 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:15:55.470 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 404], 95.00th=[ 469], 00:15:55.470 | 99.00th=[ 537], 99.50th=[ 578], 99.90th=[ 668], 99.95th=[ 3556], 00:15:55.470 | 99.99th=[ 3556] 00:15:55.470 bw ( KiB/s): min= 4744, max= 7544, per=38.78%, avg=6144.00, stdev=1979.90, samples=2 00:15:55.470 iops : min= 1186, max= 1886, avg=1536.00, stdev=494.97, samples=2 00:15:55.470 lat (usec) : 250=16.12%, 500=74.67%, 750=8.98%, 1000=0.12% 00:15:55.470 lat (msec) : 4=0.04%, 50=0.08% 00:15:55.470 cpu : usr=3.68%, sys=5.71%, ctx=2566, majf=0, minf=1 00:15:55.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.470 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.470 job2: (groupid=0, jobs=1): err= 0: pid=3033322: Sun May 12 06:56:02 2024 00:15:55.470 read: IOPS=1110, BW=4440KiB/s (4547kB/s)(4560KiB/1027msec) 00:15:55.470 slat (nsec): min=5965, max=53296, avg=15835.48, stdev=5996.99 00:15:55.470 clat (usec): min=324, max=40982, avg=492.10, stdev=2076.65 00:15:55.470 lat (usec): min=331, max=40997, avg=507.93, stdev=2076.52 00:15:55.470 clat percentiles 
(usec): 00:15:55.470 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 359], 00:15:55.470 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 388], 00:15:55.470 | 70.00th=[ 392], 80.00th=[ 400], 90.00th=[ 416], 95.00th=[ 461], 00:15:55.470 | 99.00th=[ 562], 99.50th=[ 594], 99.90th=[41157], 99.95th=[41157], 00:15:55.470 | 99.99th=[41157] 00:15:55.470 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:15:55.470 slat (nsec): min=7308, max=67471, avg=18579.15, stdev=9202.14 00:15:55.470 clat (usec): min=201, max=437, avg=263.98, stdev=41.97 00:15:55.470 lat (usec): min=210, max=477, avg=282.56, stdev=48.38 00:15:55.470 clat percentiles (usec): 00:15:55.470 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:15:55.470 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:15:55.470 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 363], 00:15:55.470 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 437], 99.95th=[ 437], 00:15:55.470 | 99.99th=[ 437] 00:15:55.470 bw ( KiB/s): min= 5152, max= 7136, per=38.78%, avg=6144.00, stdev=1402.90, samples=2 00:15:55.470 iops : min= 1288, max= 1784, avg=1536.00, stdev=350.72, samples=2 00:15:55.470 lat (usec) : 250=24.07%, 500=74.29%, 750=1.53% 00:15:55.470 lat (msec) : 50=0.11% 00:15:55.470 cpu : usr=4.39%, sys=5.17%, ctx=2676, majf=0, minf=1 00:15:55.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.470 issued rwts: total=1140,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.470 job3: (groupid=0, jobs=1): err= 0: pid=3033323: Sun May 12 06:56:02 2024 00:15:55.470 read: IOPS=321, BW=1287KiB/s (1317kB/s)(1320KiB/1026msec) 00:15:55.470 slat (nsec): min=8419, max=65370, avg=32812.55, 
stdev=6471.76 00:15:55.470 clat (usec): min=413, max=41747, avg=2350.70, stdev=8156.60 00:15:55.470 lat (usec): min=422, max=41761, avg=2383.51, stdev=8154.37 00:15:55.470 clat percentiles (usec): 00:15:55.470 | 1.00th=[ 441], 5.00th=[ 494], 10.00th=[ 529], 20.00th=[ 545], 00:15:55.470 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 660], 00:15:55.470 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 865], 00:15:55.470 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:15:55.470 | 99.99th=[41681] 00:15:55.470 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:15:55.470 slat (nsec): min=9831, max=71378, avg=34907.95, stdev=11798.30 00:15:55.470 clat (usec): min=215, max=587, avg=417.39, stdev=88.52 00:15:55.470 lat (usec): min=229, max=629, avg=452.30, stdev=95.66 00:15:55.470 clat percentiles (usec): 00:15:55.470 | 1.00th=[ 235], 5.00th=[ 251], 10.00th=[ 262], 20.00th=[ 318], 00:15:55.470 | 30.00th=[ 400], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 457], 00:15:55.470 | 70.00th=[ 469], 80.00th=[ 490], 90.00th=[ 519], 95.00th=[ 529], 00:15:55.470 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 586], 99.95th=[ 586], 00:15:55.470 | 99.99th=[ 586] 00:15:55.470 bw ( KiB/s): min= 4096, max= 4096, per=25.85%, avg=4096.00, stdev= 0.00, samples=1 00:15:55.470 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:55.470 lat (usec) : 250=2.97%, 500=50.36%, 750=40.26%, 1000=4.63% 00:15:55.470 lat (msec) : 4=0.12%, 50=1.66% 00:15:55.470 cpu : usr=1.46%, sys=2.83%, ctx=843, majf=0, minf=1 00:15:55.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.470 issued rwts: total=330,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.470 
00:15:55.470 Run status group 0 (all jobs): 00:15:55.470 READ: bw=9973KiB/s (10.2MB/s), 318KiB/s-4440KiB/s (326kB/s-4547kB/s), io=10.1MiB (10.6MB), run=1026-1034msec 00:15:55.470 WRITE: bw=15.5MiB/s (16.2MB/s), 1986KiB/s-5982KiB/s (2034kB/s-6126kB/s), io=16.0MiB (16.8MB), run=1026-1034msec 00:15:55.470 00:15:55.470 Disk stats (read/write): 00:15:55.470 nvme0n1: ios=126/512, merge=0/0, ticks=628/171, in_queue=799, util=83.27% 00:15:55.470 nvme0n2: ios=1075/1094, merge=0/0, ticks=886/268, in_queue=1154, util=98.15% 00:15:55.470 nvme0n3: ios=1024/1310, merge=0/0, ticks=374/335, in_queue=709, util=87.83% 00:15:55.470 nvme0n4: ios=346/512, merge=0/0, ticks=1432/203, in_queue=1635, util=98.23% 00:15:55.470 06:56:02 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:55.470 [global] 00:15:55.470 thread=1 00:15:55.470 invalidate=1 00:15:55.470 rw=randwrite 00:15:55.470 time_based=1 00:15:55.470 runtime=1 00:15:55.470 ioengine=libaio 00:15:55.470 direct=1 00:15:55.470 bs=4096 00:15:55.470 iodepth=1 00:15:55.470 norandommap=0 00:15:55.470 numjobs=1 00:15:55.470 00:15:55.470 verify_dump=1 00:15:55.470 verify_backlog=512 00:15:55.470 verify_state_save=0 00:15:55.470 do_verify=1 00:15:55.470 verify=crc32c-intel 00:15:55.470 [job0] 00:15:55.470 filename=/dev/nvme0n1 00:15:55.470 [job1] 00:15:55.470 filename=/dev/nvme0n2 00:15:55.470 [job2] 00:15:55.470 filename=/dev/nvme0n3 00:15:55.470 [job3] 00:15:55.470 filename=/dev/nvme0n4 00:15:55.470 Could not set queue depth (nvme0n1) 00:15:55.470 Could not set queue depth (nvme0n2) 00:15:55.470 Could not set queue depth (nvme0n3) 00:15:55.470 Could not set queue depth (nvme0n4) 00:15:55.730 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.730 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.730 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.731 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.731 fio-3.35 00:15:55.731 Starting 4 threads 00:15:57.110 00:15:57.110 job0: (groupid=0, jobs=1): err= 0: pid=3033560: Sun May 12 06:56:03 2024 00:15:57.110 read: IOPS=571, BW=2286KiB/s (2341kB/s)(2288KiB/1001msec) 00:15:57.110 slat (nsec): min=7520, max=72344, avg=23251.50, stdev=9996.16 00:15:57.110 clat (usec): min=391, max=42084, avg=1250.60, stdev=5632.17 00:15:57.110 lat (usec): min=407, max=42117, avg=1273.85, stdev=5632.16 00:15:57.110 clat percentiles (usec): 00:15:57.110 | 1.00th=[ 400], 5.00th=[ 424], 10.00th=[ 429], 20.00th=[ 437], 00:15:57.110 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 461], 60.00th=[ 469], 00:15:57.110 | 70.00th=[ 474], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 529], 00:15:57.110 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:57.110 | 99.99th=[42206] 00:15:57.110 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:15:57.110 slat (usec): min=7, max=885, avg=15.95, stdev=35.88 00:15:57.110 clat (usec): min=184, max=439, avg=242.16, stdev=37.56 00:15:57.110 lat (usec): min=192, max=1139, avg=258.12, stdev=54.56 00:15:57.110 clat percentiles (usec): 00:15:57.110 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:15:57.110 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 00:15:57.110 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 314], 00:15:57.110 | 99.00th=[ 388], 99.50th=[ 420], 99.90th=[ 437], 99.95th=[ 441], 00:15:57.110 | 99.99th=[ 441] 00:15:57.110 bw ( KiB/s): min= 4096, max= 4096, per=25.50%, avg=4096.00, stdev= 0.00, samples=1 00:15:57.110 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:57.110 lat (usec) : 250=44.61%, 500=51.00%, 750=3.70% 00:15:57.110 lat (msec) : 50=0.69% 00:15:57.110 
cpu : usr=1.40%, sys=2.80%, ctx=1600, majf=0, minf=1 00:15:57.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.110 issued rwts: total=572,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.110 job1: (groupid=0, jobs=1): err= 0: pid=3033561: Sun May 12 06:56:03 2024 00:15:57.110 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:15:57.110 slat (nsec): min=10212, max=35905, avg=21654.64, stdev=8694.57 00:15:57.110 clat (usec): min=413, max=41021, avg=39086.98, stdev=8638.83 00:15:57.110 lat (usec): min=431, max=41040, avg=39108.64, stdev=8639.71 00:15:57.110 clat percentiles (usec): 00:15:57.110 | 1.00th=[ 412], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:57.110 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:57.110 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:57.110 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:57.110 | 99.99th=[41157] 00:15:57.110 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:15:57.110 slat (nsec): min=8439, max=61073, avg=22413.29, stdev=6708.01 00:15:57.110 clat (usec): min=212, max=1082, avg=282.03, stdev=81.33 00:15:57.110 lat (usec): min=224, max=1107, avg=304.45, stdev=82.34 00:15:57.110 clat percentiles (usec): 00:15:57.110 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:15:57.110 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:15:57.110 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 338], 95.00th=[ 408], 00:15:57.110 | 99.00th=[ 627], 99.50th=[ 857], 99.90th=[ 1090], 99.95th=[ 1090], 00:15:57.110 | 99.99th=[ 1090] 00:15:57.110 bw ( KiB/s): min= 4096, max= 4096, per=25.50%, avg=4096.00, stdev= 
0.00, samples=1 00:15:57.110 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:57.110 lat (usec) : 250=25.47%, 500=68.54%, 750=1.12%, 1000=0.75% 00:15:57.110 lat (msec) : 2=0.19%, 50=3.93% 00:15:57.110 cpu : usr=0.69%, sys=1.57%, ctx=535, majf=0, minf=1 00:15:57.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.110 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.110 job2: (groupid=0, jobs=1): err= 0: pid=3033562: Sun May 12 06:56:03 2024 00:15:57.110 read: IOPS=1225, BW=4903KiB/s (5021kB/s)(4908KiB/1001msec) 00:15:57.110 slat (nsec): min=6773, max=72193, avg=24578.24, stdev=10036.14 00:15:57.110 clat (usec): min=390, max=571, avg=460.83, stdev=26.32 00:15:57.110 lat (usec): min=404, max=608, avg=485.41, stdev=31.45 00:15:57.110 clat percentiles (usec): 00:15:57.110 | 1.00th=[ 404], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 437], 00:15:57.110 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 469], 00:15:57.110 | 70.00th=[ 474], 80.00th=[ 482], 90.00th=[ 494], 95.00th=[ 506], 00:15:57.110 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 545], 99.95th=[ 570], 00:15:57.110 | 99.99th=[ 570] 00:15:57.110 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:57.110 slat (nsec): min=7272, max=81843, avg=14124.46, stdev=7070.86 00:15:57.110 clat (usec): min=197, max=885, avg=239.81, stdev=41.82 00:15:57.110 lat (usec): min=206, max=909, avg=253.93, stdev=45.44 00:15:57.110 clat percentiles (usec): 00:15:57.110 | 1.00th=[ 204], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:15:57.110 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:15:57.110 | 70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 302], 
00:15:57.110 | 99.00th=[ 375], 99.50th=[ 433], 99.90th=[ 717], 99.95th=[ 889], 00:15:57.110 | 99.99th=[ 889] 00:15:57.110 bw ( KiB/s): min= 7200, max= 7200, per=44.82%, avg=7200.00, stdev= 0.00, samples=1 00:15:57.110 iops : min= 1800, max= 1800, avg=1800.00, stdev= 0.00, samples=1 00:15:57.110 lat (usec) : 250=39.27%, 500=57.18%, 750=3.51%, 1000=0.04% 00:15:57.110 cpu : usr=2.60%, sys=5.50%, ctx=2764, majf=0, minf=2 00:15:57.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.110 issued rwts: total=1227,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.110 job3: (groupid=0, jobs=1): err= 0: pid=3033563: Sun May 12 06:56:03 2024 00:15:57.110 read: IOPS=791, BW=3167KiB/s (3243kB/s)(3192KiB/1008msec) 00:15:57.110 slat (nsec): min=5321, max=66812, avg=24687.44, stdev=10859.17 00:15:57.110 clat (usec): min=315, max=42448, avg=868.05, stdev=4363.70 00:15:57.110 lat (usec): min=322, max=42465, avg=892.74, stdev=4363.03 00:15:57.110 clat percentiles (usec): 00:15:57.110 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 359], 00:15:57.110 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 408], 00:15:57.110 | 70.00th=[ 420], 80.00th=[ 449], 90.00th=[ 474], 95.00th=[ 502], 00:15:57.111 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:57.111 | 99.99th=[42206] 00:15:57.111 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:15:57.111 slat (nsec): min=6605, max=70856, avg=16320.57, stdev=9025.44 00:15:57.111 clat (usec): min=198, max=1233, avg=261.64, stdev=77.14 00:15:57.111 lat (usec): min=208, max=1273, avg=277.96, stdev=81.86 00:15:57.111 clat percentiles (usec): 00:15:57.111 | 1.00th=[ 206], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 
219], 00:15:57.111 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 245], 00:15:57.111 | 70.00th=[ 258], 80.00th=[ 281], 90.00th=[ 359], 95.00th=[ 416], 00:15:57.111 | 99.00th=[ 502], 99.50th=[ 578], 99.90th=[ 906], 99.95th=[ 1237], 00:15:57.111 | 99.99th=[ 1237] 00:15:57.111 bw ( KiB/s): min= 424, max= 7768, per=25.50%, avg=4096.00, stdev=5192.99, samples=2 00:15:57.111 iops : min= 106, max= 1942, avg=1024.00, stdev=1298.25, samples=2 00:15:57.111 lat (usec) : 250=36.33%, 500=60.76%, 750=2.20%, 1000=0.16% 00:15:57.111 lat (msec) : 2=0.05%, 50=0.49% 00:15:57.111 cpu : usr=2.09%, sys=3.57%, ctx=1823, majf=0, minf=1 00:15:57.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.111 issued rwts: total=798,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.111 00:15:57.111 Run status group 0 (all jobs): 00:15:57.111 READ: bw=10.0MiB/s (10.5MB/s), 86.3KiB/s-4903KiB/s (88.3kB/s-5021kB/s), io=10.2MiB (10.7MB), run=1001-1020msec 00:15:57.111 WRITE: bw=15.7MiB/s (16.4MB/s), 2008KiB/s-6138KiB/s (2056kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1020msec 00:15:57.111 00:15:57.111 Disk stats (read/write): 00:15:57.111 nvme0n1: ios=555/512, merge=0/0, ticks=972/120, in_queue=1092, util=96.39% 00:15:57.111 nvme0n2: ios=63/512, merge=0/0, ticks=859/140, in_queue=999, util=96.44% 00:15:57.111 nvme0n3: ios=1072/1272, merge=0/0, ticks=668/301, in_queue=969, util=99.27% 00:15:57.111 nvme0n4: ios=851/1024, merge=0/0, ticks=1303/255, in_queue=1558, util=97.57% 00:15:57.111 06:56:03 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:57.111 [global] 00:15:57.111 thread=1 00:15:57.111 invalidate=1 00:15:57.111 rw=write 
00:15:57.111 time_based=1 00:15:57.111 runtime=1 00:15:57.111 ioengine=libaio 00:15:57.111 direct=1 00:15:57.111 bs=4096 00:15:57.111 iodepth=128 00:15:57.111 norandommap=0 00:15:57.111 numjobs=1 00:15:57.111 00:15:57.111 verify_dump=1 00:15:57.111 verify_backlog=512 00:15:57.111 verify_state_save=0 00:15:57.111 do_verify=1 00:15:57.111 verify=crc32c-intel 00:15:57.111 [job0] 00:15:57.111 filename=/dev/nvme0n1 00:15:57.111 [job1] 00:15:57.111 filename=/dev/nvme0n2 00:15:57.111 [job2] 00:15:57.111 filename=/dev/nvme0n3 00:15:57.111 [job3] 00:15:57.111 filename=/dev/nvme0n4 00:15:57.111 Could not set queue depth (nvme0n1) 00:15:57.111 Could not set queue depth (nvme0n2) 00:15:57.111 Could not set queue depth (nvme0n3) 00:15:57.111 Could not set queue depth (nvme0n4) 00:15:57.111 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.111 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.111 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.111 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.111 fio-3.35 00:15:57.111 Starting 4 threads 00:15:58.488 00:15:58.488 job0: (groupid=0, jobs=1): err= 0: pid=3033793: Sun May 12 06:56:05 2024 00:15:58.488 read: IOPS=4492, BW=17.5MiB/s (18.4MB/s)(17.7MiB/1008msec) 00:15:58.488 slat (usec): min=2, max=11506, avg=108.01, stdev=699.10 00:15:58.488 clat (usec): min=3073, max=37630, avg=13793.90, stdev=4553.27 00:15:58.488 lat (usec): min=3077, max=37639, avg=13901.91, stdev=4588.25 00:15:58.488 clat percentiles (usec): 00:15:58.488 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10945], 00:15:58.488 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12780], 60.00th=[13435], 00:15:58.488 | 70.00th=[15008], 80.00th=[16581], 90.00th=[19006], 95.00th=[22414], 00:15:58.488 | 
99.00th=[31851], 99.50th=[33162], 99.90th=[37487], 99.95th=[37487], 00:15:58.488 | 99.99th=[37487] 00:15:58.488 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:15:58.488 slat (usec): min=4, max=8197, avg=100.72, stdev=538.20 00:15:58.488 clat (usec): min=654, max=41082, avg=14190.35, stdev=8705.17 00:15:58.488 lat (usec): min=1226, max=41106, avg=14291.08, stdev=8753.73 00:15:58.488 clat percentiles (usec): 00:15:58.488 | 1.00th=[ 3294], 5.00th=[ 5407], 10.00th=[ 6652], 20.00th=[ 7635], 00:15:58.488 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[11076], 60.00th=[12518], 00:15:58.488 | 70.00th=[14746], 80.00th=[22676], 90.00th=[27395], 95.00th=[33424], 00:15:58.488 | 99.00th=[39584], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:15:58.488 | 99.99th=[41157] 00:15:58.488 bw ( KiB/s): min=16384, max=20480, per=30.42%, avg=18432.00, stdev=2896.31, samples=2 00:15:58.488 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:15:58.488 lat (usec) : 750=0.01% 00:15:58.488 lat (msec) : 2=0.39%, 4=0.46%, 10=29.18%, 20=55.66%, 50=14.30% 00:15:58.488 cpu : usr=5.26%, sys=9.14%, ctx=362, majf=0, minf=1 00:15:58.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:58.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.488 issued rwts: total=4528,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.488 job1: (groupid=0, jobs=1): err= 0: pid=3033794: Sun May 12 06:56:05 2024 00:15:58.488 read: IOPS=3169, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1009msec) 00:15:58.488 slat (usec): min=2, max=32902, avg=153.75, stdev=1312.16 00:15:58.488 clat (usec): min=2275, max=92367, avg=19750.99, stdev=11153.44 00:15:58.488 lat (usec): min=5640, max=92381, avg=19904.74, stdev=11175.96 00:15:58.488 clat percentiles (usec): 00:15:58.488 | 
1.00th=[ 7439], 5.00th=[10028], 10.00th=[10945], 20.00th=[12125], 00:15:58.488 | 30.00th=[13173], 40.00th=[15008], 50.00th=[16450], 60.00th=[18482], 00:15:58.488 | 70.00th=[21103], 80.00th=[25560], 90.00th=[34866], 95.00th=[41157], 00:15:58.488 | 99.00th=[53216], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:15:58.488 | 99.99th=[92799] 00:15:58.488 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:15:58.488 slat (usec): min=4, max=72997, avg=134.76, stdev=1591.73 00:15:58.488 clat (usec): min=3430, max=85802, avg=18065.69, stdev=13459.35 00:15:58.488 lat (usec): min=3445, max=85846, avg=18200.45, stdev=13514.09 00:15:58.488 clat percentiles (usec): 00:15:58.488 | 1.00th=[ 4621], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[10552], 00:15:58.488 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14353], 60.00th=[15008], 00:15:58.488 | 70.00th=[16581], 80.00th=[20055], 90.00th=[35914], 95.00th=[41681], 00:15:58.488 | 99.00th=[77071], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:15:58.488 | 99.99th=[85459] 00:15:58.488 bw ( KiB/s): min=12584, max=16072, per=23.65%, avg=14328.00, stdev=2466.39, samples=2 00:15:58.488 iops : min= 3146, max= 4018, avg=3582.00, stdev=616.60, samples=2 00:15:58.488 lat (msec) : 4=0.21%, 10=12.12%, 20=60.60%, 50=24.30%, 100=2.77% 00:15:58.488 cpu : usr=4.27%, sys=5.95%, ctx=252, majf=0, minf=1 00:15:58.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:58.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.489 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.489 job2: (groupid=0, jobs=1): err= 0: pid=3033795: Sun May 12 06:56:05 2024 00:15:58.489 read: IOPS=2446, BW=9788KiB/s (10.0MB/s)(9876KiB/1009msec) 00:15:58.489 slat (usec): min=2, max=25427, avg=258.65, 
stdev=1693.97 00:15:58.489 clat (usec): min=503, max=82033, avg=32742.67, stdev=19351.99 00:15:58.489 lat (usec): min=8152, max=82048, avg=33001.32, stdev=19505.26 00:15:58.489 clat percentiles (usec): 00:15:58.489 | 1.00th=[ 8455], 5.00th=[11469], 10.00th=[12125], 20.00th=[12518], 00:15:58.489 | 30.00th=[17171], 40.00th=[20317], 50.00th=[26346], 60.00th=[36963], 00:15:58.489 | 70.00th=[47449], 80.00th=[56886], 90.00th=[61604], 95.00th=[63701], 00:15:58.489 | 99.00th=[67634], 99.50th=[69731], 99.90th=[77071], 99.95th=[79168], 00:15:58.489 | 99.99th=[82314] 00:15:58.489 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:15:58.489 slat (usec): min=3, max=26748, avg=134.68, stdev=958.28 00:15:58.489 clat (usec): min=1348, max=62702, avg=17065.23, stdev=8816.61 00:15:58.489 lat (usec): min=1360, max=62720, avg=17199.90, stdev=8888.50 00:15:58.489 clat percentiles (usec): 00:15:58.489 | 1.00th=[ 5538], 5.00th=[ 7832], 10.00th=[ 8979], 20.00th=[11207], 00:15:58.489 | 30.00th=[12256], 40.00th=[12649], 50.00th=[14222], 60.00th=[15139], 00:15:58.489 | 70.00th=[18744], 80.00th=[23725], 90.00th=[29230], 95.00th=[33817], 00:15:58.489 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53740], 99.95th=[60031], 00:15:58.489 | 99.99th=[62653] 00:15:58.489 bw ( KiB/s): min= 8192, max=12288, per=16.90%, avg=10240.00, stdev=2896.31, samples=2 00:15:58.489 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:15:58.489 lat (usec) : 750=0.02% 00:15:58.489 lat (msec) : 2=0.28%, 4=0.02%, 10=8.51%, 20=47.50%, 50=31.50% 00:15:58.489 lat (msec) : 100=12.17% 00:15:58.489 cpu : usr=2.68%, sys=4.07%, ctx=193, majf=0, minf=1 00:15:58.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:58.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.489 issued rwts: total=2469,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:15:58.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.489 job3: (groupid=0, jobs=1): err= 0: pid=3033796: Sun May 12 06:56:05 2024 00:15:58.489 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:15:58.489 slat (usec): min=2, max=14485, avg=103.17, stdev=581.54 00:15:58.489 clat (usec): min=8716, max=56564, avg=13510.61, stdev=5071.37 00:15:58.489 lat (usec): min=8725, max=56575, avg=13613.79, stdev=5086.64 00:15:58.489 clat percentiles (usec): 00:15:58.489 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10945], 00:15:58.489 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:15:58.489 | 70.00th=[13173], 80.00th=[13829], 90.00th=[18744], 95.00th=[19792], 00:15:58.489 | 99.00th=[37487], 99.50th=[37487], 99.90th=[56361], 99.95th=[56361], 00:15:58.489 | 99.99th=[56361] 00:15:58.489 write: IOPS=4517, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1003msec); 0 zone resets 00:15:58.489 slat (usec): min=3, max=47694, avg=121.67, stdev=897.51 00:15:58.489 clat (usec): min=389, max=78025, avg=14217.75, stdev=4468.12 00:15:58.489 lat (usec): min=3497, max=78074, avg=14339.41, stdev=4586.03 00:15:58.489 clat percentiles (usec): 00:15:58.489 | 1.00th=[ 7046], 5.00th=[10028], 10.00th=[10552], 20.00th=[11207], 00:15:58.489 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13304], 60.00th=[13698], 00:15:58.489 | 70.00th=[14484], 80.00th=[15139], 90.00th=[21890], 95.00th=[23462], 00:15:58.489 | 99.00th=[30540], 99.50th=[31589], 99.90th=[31851], 99.95th=[36963], 00:15:58.489 | 99.99th=[78119] 00:15:58.489 bw ( KiB/s): min=16384, max=18840, per=29.07%, avg=17612.00, stdev=1736.65, samples=2 00:15:58.489 iops : min= 4096, max= 4710, avg=4403.00, stdev=434.16, samples=2 00:15:58.489 lat (usec) : 500=0.01% 00:15:58.489 lat (msec) : 4=0.37%, 10=6.19%, 20=85.07%, 50=8.18%, 100=0.17% 00:15:58.489 cpu : usr=3.49%, sys=5.29%, ctx=556, majf=0, minf=1 00:15:58.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 
00:15:58.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.489 issued rwts: total=4096,4531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.489 00:15:58.489 Run status group 0 (all jobs): 00:15:58.489 READ: bw=55.3MiB/s (58.0MB/s), 9788KiB/s-17.5MiB/s (10.0MB/s-18.4MB/s), io=55.8MiB (58.5MB), run=1003-1009msec 00:15:58.489 WRITE: bw=59.2MiB/s (62.0MB/s), 9.91MiB/s-17.9MiB/s (10.4MB/s-18.7MB/s), io=59.7MiB (62.6MB), run=1003-1009msec 00:15:58.489 00:15:58.489 Disk stats (read/write): 00:15:58.489 nvme0n1: ios=3625/3669, merge=0/0, ticks=49060/54766, in_queue=103826, util=98.30% 00:15:58.489 nvme0n2: ios=2603/3072, merge=0/0, ticks=49948/53537, in_queue=103485, util=100.00% 00:15:58.489 nvme0n3: ios=2114/2560, merge=0/0, ticks=19769/18046, in_queue=37815, util=97.71% 00:15:58.489 nvme0n4: ios=3560/3584, merge=0/0, ticks=14001/13474, in_queue=27475, util=98.21% 00:15:58.489 06:56:05 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:58.489 [global] 00:15:58.489 thread=1 00:15:58.489 invalidate=1 00:15:58.489 rw=randwrite 00:15:58.489 time_based=1 00:15:58.489 runtime=1 00:15:58.489 ioengine=libaio 00:15:58.489 direct=1 00:15:58.489 bs=4096 00:15:58.489 iodepth=128 00:15:58.489 norandommap=0 00:15:58.489 numjobs=1 00:15:58.489 00:15:58.489 verify_dump=1 00:15:58.489 verify_backlog=512 00:15:58.489 verify_state_save=0 00:15:58.489 do_verify=1 00:15:58.489 verify=crc32c-intel 00:15:58.489 [job0] 00:15:58.489 filename=/dev/nvme0n1 00:15:58.489 [job1] 00:15:58.489 filename=/dev/nvme0n2 00:15:58.489 [job2] 00:15:58.489 filename=/dev/nvme0n3 00:15:58.489 [job3] 00:15:58.489 filename=/dev/nvme0n4 00:15:58.489 Could not set queue depth (nvme0n1) 00:15:58.489 Could not set queue depth (nvme0n2) 
00:15:58.489 Could not set queue depth (nvme0n3) 00:15:58.489 Could not set queue depth (nvme0n4) 00:15:58.489 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.489 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.489 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.489 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.489 fio-3.35 00:15:58.489 Starting 4 threads 00:15:59.871 00:15:59.871 job0: (groupid=0, jobs=1): err= 0: pid=3034070: Sun May 12 06:56:06 2024 00:15:59.871 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:15:59.871 slat (usec): min=2, max=43278, avg=186.65, stdev=1559.93 00:15:59.871 clat (usec): min=5497, max=78953, avg=23372.63, stdev=14620.71 00:15:59.871 lat (usec): min=5506, max=79025, avg=23559.27, stdev=14709.89 00:15:59.871 clat percentiles (usec): 00:15:59.871 | 1.00th=[ 6980], 5.00th=[ 9241], 10.00th=[11207], 20.00th=[14615], 00:15:59.871 | 30.00th=[15008], 40.00th=[16188], 50.00th=[17433], 60.00th=[19268], 00:15:59.871 | 70.00th=[22676], 80.00th=[35914], 90.00th=[49021], 95.00th=[54789], 00:15:59.871 | 99.00th=[67634], 99.50th=[71828], 99.90th=[71828], 99.95th=[74974], 00:15:59.871 | 99.99th=[79168] 00:15:59.871 write: IOPS=2988, BW=11.7MiB/s (12.2MB/s)(11.8MiB/1007msec); 0 zone resets 00:15:59.871 slat (usec): min=3, max=38837, avg=167.66, stdev=1392.65 00:15:59.871 clat (usec): min=544, max=71407, avg=22280.89, stdev=12828.89 00:15:59.871 lat (usec): min=3856, max=71419, avg=22448.55, stdev=12891.89 00:15:59.871 clat percentiles (usec): 00:15:59.871 | 1.00th=[ 7111], 5.00th=[10552], 10.00th=[13173], 20.00th=[13566], 00:15:59.871 | 30.00th=[14091], 40.00th=[15795], 50.00th=[16909], 60.00th=[19530], 00:15:59.871 | 70.00th=[22676], 80.00th=[30540], 
90.00th=[46924], 95.00th=[52167], 00:15:59.871 | 99.00th=[55837], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:15:59.871 | 99.99th=[71828] 00:15:59.871 bw ( KiB/s): min=10952, max=12096, per=18.67%, avg=11524.00, stdev=808.93, samples=2 00:15:59.871 iops : min= 2738, max= 3024, avg=2881.00, stdev=202.23, samples=2 00:15:59.871 lat (usec) : 750=0.02% 00:15:59.871 lat (msec) : 4=0.13%, 10=5.28%, 20=58.32%, 50=27.64%, 100=8.62% 00:15:59.871 cpu : usr=2.19%, sys=4.87%, ctx=199, majf=0, minf=15 00:15:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.871 issued rwts: total=2560,3009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.871 job1: (groupid=0, jobs=1): err= 0: pid=3034092: Sun May 12 06:56:06 2024 00:15:59.871 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:15:59.871 slat (usec): min=3, max=10481, avg=121.97, stdev=712.04 00:15:59.871 clat (usec): min=6513, max=31129, avg=15577.55, stdev=4678.06 00:15:59.871 lat (usec): min=6528, max=33479, avg=15699.52, stdev=4728.89 00:15:59.871 clat percentiles (usec): 00:15:59.871 | 1.00th=[ 8291], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10552], 00:15:59.871 | 30.00th=[12518], 40.00th=[13960], 50.00th=[15270], 60.00th=[16188], 00:15:59.871 | 70.00th=[17695], 80.00th=[19792], 90.00th=[22676], 95.00th=[24511], 00:15:59.871 | 99.00th=[25297], 99.50th=[26608], 99.90th=[28181], 99.95th=[29754], 00:15:59.871 | 99.99th=[31065] 00:15:59.871 write: IOPS=3847, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1006msec); 0 zone resets 00:15:59.871 slat (usec): min=4, max=11964, avg=133.55, stdev=701.38 00:15:59.871 clat (usec): min=2262, max=83303, avg=18464.17, stdev=15860.96 00:15:59.871 lat (usec): min=5947, max=83336, avg=18597.72, stdev=15951.38 
00:15:59.871 clat percentiles (usec): 00:15:59.871 | 1.00th=[ 6521], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9241], 00:15:59.871 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11863], 60.00th=[13829], 00:15:59.871 | 70.00th=[18220], 80.00th=[23462], 90.00th=[32900], 95.00th=[59507], 00:15:59.871 | 99.00th=[80217], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:15:59.871 | 99.99th=[83362] 00:15:59.871 bw ( KiB/s): min=13304, max=16640, per=24.26%, avg=14972.00, stdev=2358.91, samples=2 00:15:59.871 iops : min= 3326, max= 4160, avg=3743.00, stdev=589.73, samples=2 00:15:59.871 lat (msec) : 4=0.01%, 10=23.42%, 20=54.93%, 50=17.76%, 100=3.88% 00:15:59.871 cpu : usr=5.67%, sys=10.25%, ctx=372, majf=0, minf=9 00:15:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.871 issued rwts: total=3584,3871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.871 job2: (groupid=0, jobs=1): err= 0: pid=3034123: Sun May 12 06:56:06 2024 00:15:59.871 read: IOPS=3121, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1010msec) 00:15:59.871 slat (usec): min=2, max=73481, avg=145.99, stdev=1632.56 00:15:59.871 clat (msec): min=3, max=101, avg=21.32, stdev=15.85 00:15:59.871 lat (msec): min=3, max=101, avg=21.47, stdev=15.94 00:15:59.871 clat percentiles (msec): 00:15:59.871 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:15:59.871 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 18], 00:15:59.871 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 37], 95.00th=[ 52], 00:15:59.871 | 99.00th=[ 94], 99.50th=[ 94], 99.90th=[ 94], 99.95th=[ 102], 00:15:59.871 | 99.99th=[ 102] 00:15:59.871 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:15:59.871 slat (usec): min=3, max=22748, avg=90.44, 
stdev=871.94 00:15:59.871 clat (usec): min=1088, max=104353, avg=16870.04, stdev=13010.26 00:15:59.871 lat (usec): min=1098, max=104359, avg=16960.49, stdev=13046.98 00:15:59.871 clat percentiles (msec): 00:15:59.871 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:15:59.871 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16], 00:15:59.871 | 70.00th=[ 19], 80.00th=[ 21], 90.00th=[ 27], 95.00th=[ 30], 00:15:59.871 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 96], 99.95th=[ 105], 00:15:59.871 | 99.99th=[ 105] 00:15:59.871 bw ( KiB/s): min=12536, max=15760, per=22.92%, avg=14148.00, stdev=2279.71, samples=2 00:15:59.871 iops : min= 3134, max= 3940, avg=3537.00, stdev=569.93, samples=2 00:15:59.871 lat (msec) : 2=0.15%, 4=0.99%, 10=9.90%, 20=61.36%, 50=23.26% 00:15:59.871 lat (msec) : 100=4.25%, 250=0.09% 00:15:59.871 cpu : usr=2.38%, sys=4.26%, ctx=190, majf=0, minf=15 00:15:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.871 issued rwts: total=3153,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.871 job3: (groupid=0, jobs=1): err= 0: pid=3034135: Sun May 12 06:56:06 2024 00:15:59.871 read: IOPS=4789, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1005msec) 00:15:59.871 slat (usec): min=3, max=10803, avg=94.46, stdev=596.24 00:15:59.871 clat (usec): min=3624, max=35040, avg=12306.90, stdev=3343.27 00:15:59.871 lat (usec): min=5085, max=35081, avg=12401.36, stdev=3370.76 00:15:59.871 clat percentiles (usec): 00:15:59.871 | 1.00th=[ 6194], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10421], 00:15:59.871 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[11863], 00:15:59.871 | 70.00th=[12649], 80.00th=[13698], 90.00th=[15926], 95.00th=[20841], 00:15:59.871 | 99.00th=[25822], 
99.50th=[25822], 99.90th=[27132], 99.95th=[28443], 00:15:59.871 | 99.99th=[34866] 00:15:59.871 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:15:59.871 slat (usec): min=4, max=9911, avg=93.16, stdev=624.40 00:15:59.871 clat (usec): min=6312, max=30414, avg=13208.90, stdev=3531.12 00:15:59.871 lat (usec): min=6516, max=30434, avg=13302.05, stdev=3564.98 00:15:59.871 clat percentiles (usec): 00:15:59.871 | 1.00th=[ 7767], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:15:59.871 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12125], 60.00th=[12649], 00:15:59.871 | 70.00th=[13960], 80.00th=[15270], 90.00th=[19268], 95.00th=[20841], 00:15:59.871 | 99.00th=[25822], 99.50th=[26084], 99.90th=[28443], 99.95th=[29754], 00:15:59.871 | 99.99th=[30540] 00:15:59.871 bw ( KiB/s): min=20480, max=20521, per=33.22%, avg=20500.50, stdev=28.99, samples=2 00:15:59.871 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:15:59.871 lat (msec) : 4=0.01%, 10=8.78%, 20=84.06%, 50=7.15% 00:15:59.871 cpu : usr=8.17%, sys=13.35%, ctx=307, majf=0, minf=13 00:15:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.871 issued rwts: total=4813,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.871 00:15:59.871 Run status group 0 (all jobs): 00:15:59.871 READ: bw=54.6MiB/s (57.2MB/s), 9.93MiB/s-18.7MiB/s (10.4MB/s-19.6MB/s), io=55.1MiB (57.8MB), run=1005-1010msec 00:15:59.872 WRITE: bw=60.3MiB/s (63.2MB/s), 11.7MiB/s-19.9MiB/s (12.2MB/s-20.9MB/s), io=60.9MiB (63.8MB), run=1005-1010msec 00:15:59.872 00:15:59.872 Disk stats (read/write): 00:15:59.872 nvme0n1: ios=2088/2348, merge=0/0, ticks=21166/23697, in_queue=44863, util=99.90% 00:15:59.872 nvme0n2: ios=3112/3167, merge=0/0, 
ticks=23515/27341, in_queue=50856, util=100.00% 00:15:59.872 nvme0n3: ios=2560/2856, merge=0/0, ticks=47989/35486, in_queue=83475, util=87.83% 00:15:59.872 nvme0n4: ios=4120/4301, merge=0/0, ticks=24724/25084, in_queue=49808, util=99.47% 00:15:59.872 06:56:06 -- target/fio.sh@55 -- # sync 00:15:59.872 06:56:06 -- target/fio.sh@59 -- # fio_pid=3034296 00:15:59.872 06:56:06 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:59.872 06:56:06 -- target/fio.sh@61 -- # sleep 3 00:15:59.872 [global] 00:15:59.872 thread=1 00:15:59.872 invalidate=1 00:15:59.872 rw=read 00:15:59.872 time_based=1 00:15:59.872 runtime=10 00:15:59.872 ioengine=libaio 00:15:59.872 direct=1 00:15:59.872 bs=4096 00:15:59.872 iodepth=1 00:15:59.872 norandommap=1 00:15:59.872 numjobs=1 00:15:59.872 00:15:59.872 [job0] 00:15:59.872 filename=/dev/nvme0n1 00:15:59.872 [job1] 00:15:59.872 filename=/dev/nvme0n2 00:15:59.872 [job2] 00:15:59.872 filename=/dev/nvme0n3 00:15:59.872 [job3] 00:15:59.872 filename=/dev/nvme0n4 00:15:59.872 Could not set queue depth (nvme0n1) 00:15:59.872 Could not set queue depth (nvme0n2) 00:15:59.872 Could not set queue depth (nvme0n3) 00:15:59.872 Could not set queue depth (nvme0n4) 00:16:00.132 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.132 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.132 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.132 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.132 fio-3.35 00:16:00.132 Starting 4 threads 00:16:03.427 06:56:09 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:03.427 06:56:10 -- target/fio.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:03.427 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=294912, buflen=4096 00:16:03.427 fio: pid=3034391, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:03.427 06:56:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:03.427 06:56:10 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:03.427 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=970752, buflen=4096 00:16:03.427 fio: pid=3034390, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:03.685 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=31563776, buflen=4096 00:16:03.685 fio: pid=3034386, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:03.685 06:56:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:03.685 06:56:10 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:03.942 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=17584128, buflen=4096 00:16:03.942 fio: pid=3034387, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:03.942 06:56:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:03.942 06:56:10 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:03.942 00:16:03.942 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3034386: Sun May 12 06:56:10 2024 00:16:03.942 read: IOPS=2269, BW=9077KiB/s (9294kB/s)(30.1MiB/3396msec) 00:16:03.942 slat (usec): min=4, max=1896, avg=12.68, stdev=22.37 00:16:03.942 clat (usec): min=298, 
max=41321, avg=424.72, stdev=1306.14 00:16:03.942 lat (usec): min=304, max=43018, avg=437.40, stdev=1314.09 00:16:03.942 clat percentiles (usec): 00:16:03.942 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 338], 00:16:03.942 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:16:03.942 | 70.00th=[ 388], 80.00th=[ 420], 90.00th=[ 461], 95.00th=[ 519], 00:16:03.942 | 99.00th=[ 627], 99.50th=[ 717], 99.90th=[40633], 99.95th=[41157], 00:16:03.942 | 99.99th=[41157] 00:16:03.942 bw ( KiB/s): min= 7624, max=11112, per=71.82%, avg=9588.00, stdev=1295.44, samples=6 00:16:03.942 iops : min= 1906, max= 2778, avg=2397.00, stdev=323.86, samples=6 00:16:03.942 lat (usec) : 500=93.15%, 750=6.49%, 1000=0.21% 00:16:03.942 lat (msec) : 2=0.04%, 50=0.10% 00:16:03.942 cpu : usr=1.80%, sys=4.54%, ctx=7708, majf=0, minf=1 00:16:03.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.942 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.942 issued rwts: total=7707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.942 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.942 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3034387: Sun May 12 06:56:10 2024 00:16:03.942 read: IOPS=1164, BW=4656KiB/s (4768kB/s)(16.8MiB/3688msec) 00:16:03.942 slat (usec): min=5, max=14775, avg=17.04, stdev=242.68 00:16:03.942 clat (usec): min=306, max=41391, avg=838.57, stdev=4238.86 00:16:03.942 lat (usec): min=313, max=55915, avg=855.61, stdev=4293.88 00:16:03.942 clat percentiles (usec): 00:16:03.942 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 343], 00:16:03.942 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 388], 00:16:03.942 | 70.00th=[ 400], 80.00th=[ 429], 90.00th=[ 469], 95.00th=[ 519], 00:16:03.942 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[41157], 99.95th=[41157], 00:16:03.942 | 99.99th=[41157] 00:16:03.942 bw ( KiB/s): min= 104, max=10016, per=36.69%, avg=4898.43, stdev=4620.80, samples=7 00:16:03.942 iops : min= 26, max= 2504, avg=1224.57, stdev=1155.24, samples=7 00:16:03.942 lat (usec) : 500=93.81%, 750=4.77%, 1000=0.23% 00:16:03.942 lat (msec) : 2=0.05%, 50=1.12% 00:16:03.942 cpu : usr=1.08%, sys=2.06%, ctx=4299, majf=0, minf=1 00:16:03.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.942 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.942 issued rwts: total=4294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.942 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.942 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3034390: Sun May 12 06:56:10 2024 00:16:03.942 read: IOPS=74, BW=299KiB/s (306kB/s)(948KiB/3175msec) 00:16:03.942 slat (nsec): min=7110, max=44185, avg=14052.97, stdev=8287.27 00:16:03.942 clat (usec): min=432, max=44215, avg=13374.14, stdev=19192.69 00:16:03.942 lat (usec): min=441, max=44233, avg=13388.15, stdev=19197.78 00:16:03.942 clat percentiles (usec): 00:16:03.942 | 1.00th=[ 437], 5.00th=[ 441], 10.00th=[ 449], 20.00th=[ 461], 00:16:03.942 | 30.00th=[ 465], 40.00th=[ 474], 50.00th=[ 482], 60.00th=[ 494], 00:16:03.942 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:16:03.942 | 99.00th=[42206], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:16:03.942 | 99.99th=[44303] 00:16:03.942 bw ( KiB/s): min= 96, max= 1376, per=2.31%, avg=309.33, stdev=522.56, samples=6 00:16:03.942 iops : min= 24, max= 344, avg=77.33, stdev=130.64, samples=6 00:16:03.942 lat (usec) : 500=61.34%, 750=7.14% 00:16:03.942 lat (msec) : 50=31.09% 00:16:03.942 cpu : usr=0.03%, sys=0.16%, ctx=238, majf=0, minf=1 00:16:03.942 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.942 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.942 issued rwts: total=238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.943 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.943 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3034391: Sun May 12 06:56:10 2024 00:16:03.943 read: IOPS=25, BW=98.9KiB/s (101kB/s)(288KiB/2913msec) 00:16:03.943 slat (nsec): min=13126, max=46745, avg=24008.11, stdev=9747.64 00:16:03.943 clat (usec): min=612, max=41102, avg=40410.29, stdev=4756.54 00:16:03.943 lat (usec): min=630, max=41148, avg=40434.42, stdev=4757.18 00:16:03.943 clat percentiles (usec): 00:16:03.943 | 1.00th=[ 611], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:03.943 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:03.943 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:03.943 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:03.943 | 99.99th=[41157] 00:16:03.943 bw ( KiB/s): min= 96, max= 104, per=0.74%, avg=99.20, stdev= 4.38, samples=5 00:16:03.943 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:16:03.943 lat (usec) : 750=1.37% 00:16:03.943 lat (msec) : 50=97.26% 00:16:03.943 cpu : usr=0.00%, sys=0.14%, ctx=74, majf=0, minf=1 00:16:03.943 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.943 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.943 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.943 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.943 00:16:03.943 Run status group 0 (all jobs): 00:16:03.943 READ: bw=13.0MiB/s 
(13.7MB/s), 98.9KiB/s-9077KiB/s (101kB/s-9294kB/s), io=48.1MiB (50.4MB), run=2913-3688msec 00:16:03.943 00:16:03.943 Disk stats (read/write): 00:16:03.943 nvme0n1: ios=7606/0, merge=0/0, ticks=3167/0, in_queue=3167, util=95.82% 00:16:03.943 nvme0n2: ios=4331/0, merge=0/0, ticks=4528/0, in_queue=4528, util=99.44% 00:16:03.943 nvme0n3: ios=235/0, merge=0/0, ticks=3086/0, in_queue=3086, util=96.72% 00:16:03.943 nvme0n4: ios=120/0, merge=0/0, ticks=4050/0, in_queue=4050, util=99.59% 00:16:04.200 06:56:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:04.201 06:56:11 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:04.458 06:56:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:04.458 06:56:11 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:04.716 06:56:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:04.716 06:56:11 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:04.973 06:56:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:04.973 06:56:11 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:05.230 06:56:12 -- target/fio.sh@69 -- # fio_status=0 00:16:05.230 06:56:12 -- target/fio.sh@70 -- # wait 3034296 00:16:05.230 06:56:12 -- target/fio.sh@70 -- # fio_status=4 00:16:05.231 06:56:12 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.231 06:56:12 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:05.231 06:56:12 -- 
common/autotest_common.sh@1198 -- # local i=0 00:16:05.231 06:56:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:05.231 06:56:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.231 06:56:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:05.231 06:56:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.231 06:56:12 -- common/autotest_common.sh@1210 -- # return 0 00:16:05.231 06:56:12 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:05.231 06:56:12 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:05.231 nvmf hotplug test: fio failed as expected 00:16:05.231 06:56:12 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.489 06:56:12 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:05.489 06:56:12 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:05.489 06:56:12 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:05.489 06:56:12 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:05.489 06:56:12 -- target/fio.sh@91 -- # nvmftestfini 00:16:05.489 06:56:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:05.489 06:56:12 -- nvmf/common.sh@116 -- # sync 00:16:05.489 06:56:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:05.489 06:56:12 -- nvmf/common.sh@119 -- # set +e 00:16:05.489 06:56:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:05.489 06:56:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:05.489 rmmod nvme_tcp 00:16:05.489 rmmod nvme_fabrics 00:16:05.489 rmmod nvme_keyring 00:16:05.745 06:56:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:05.745 06:56:12 -- nvmf/common.sh@123 -- # set -e 00:16:05.745 06:56:12 -- nvmf/common.sh@124 -- # return 0 00:16:05.745 06:56:12 -- nvmf/common.sh@477 -- # '[' -n 3032211 ']' 00:16:05.745 06:56:12 -- nvmf/common.sh@478 -- # 
killprocess 3032211 00:16:05.745 06:56:12 -- common/autotest_common.sh@926 -- # '[' -z 3032211 ']' 00:16:05.745 06:56:12 -- common/autotest_common.sh@930 -- # kill -0 3032211 00:16:05.745 06:56:12 -- common/autotest_common.sh@931 -- # uname 00:16:05.745 06:56:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.745 06:56:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3032211 00:16:05.745 06:56:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.745 06:56:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.745 06:56:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3032211' 00:16:05.745 killing process with pid 3032211 00:16:05.745 06:56:12 -- common/autotest_common.sh@945 -- # kill 3032211 00:16:05.745 06:56:12 -- common/autotest_common.sh@950 -- # wait 3032211 00:16:06.004 06:56:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:06.004 06:56:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:06.004 06:56:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:06.004 06:56:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.004 06:56:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:06.004 06:56:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.004 06:56:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.004 06:56:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.909 06:56:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:07.909 00:16:07.909 real 0m23.733s 00:16:07.909 user 1m22.401s 00:16:07.909 sys 0m6.886s 00:16:07.909 06:56:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.909 06:56:14 -- common/autotest_common.sh@10 -- # set +x 00:16:07.909 ************************************ 00:16:07.909 END TEST nvmf_fio_target 00:16:07.909 ************************************ 00:16:07.909 06:56:14 -- nvmf/nvmf.sh@55 -- # run_test 
nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:07.909 06:56:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:07.909 06:56:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:07.909 06:56:14 -- common/autotest_common.sh@10 -- # set +x 00:16:07.909 ************************************ 00:16:07.909 START TEST nvmf_bdevio 00:16:07.909 ************************************ 00:16:07.909 06:56:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:07.909 * Looking for test storage... 00:16:07.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.909 06:56:15 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.909 06:56:15 -- nvmf/common.sh@7 -- # uname -s 00:16:07.909 06:56:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.909 06:56:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.909 06:56:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.909 06:56:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.909 06:56:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.909 06:56:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.909 06:56:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.909 06:56:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.909 06:56:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.909 06:56:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.169 06:56:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.169 06:56:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.169 06:56:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:16:08.169 06:56:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.169 06:56:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.169 06:56:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.169 06:56:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.169 06:56:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.169 06:56:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.169 06:56:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.169 06:56:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.169 06:56:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.169 06:56:15 -- paths/export.sh@5 -- # export PATH 00:16:08.169 06:56:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.169 06:56:15 -- nvmf/common.sh@46 -- # : 0 00:16:08.169 06:56:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:08.169 06:56:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:08.169 06:56:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:08.169 06:56:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.169 06:56:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.169 06:56:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:08.169 06:56:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:08.169 06:56:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:08.169 06:56:15 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.169 06:56:15 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.169 06:56:15 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:16:08.169 06:56:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:08.169 06:56:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.169 06:56:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:08.169 06:56:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:08.169 06:56:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:08.169 06:56:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.169 06:56:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.169 06:56:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.169 06:56:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:08.169 06:56:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:08.169 06:56:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:08.169 06:56:15 -- common/autotest_common.sh@10 -- # set +x 00:16:10.072 06:56:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:10.072 06:56:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:10.072 06:56:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:10.072 06:56:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:10.072 06:56:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:10.072 06:56:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:10.072 06:56:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:10.072 06:56:17 -- nvmf/common.sh@294 -- # net_devs=() 00:16:10.073 06:56:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:10.073 06:56:17 -- nvmf/common.sh@295 -- # e810=() 00:16:10.073 06:56:17 -- nvmf/common.sh@295 -- # local -ga e810 00:16:10.073 06:56:17 -- nvmf/common.sh@296 -- # x722=() 00:16:10.073 06:56:17 -- nvmf/common.sh@296 -- # local -ga x722 00:16:10.073 06:56:17 -- nvmf/common.sh@297 -- # mlx=() 00:16:10.073 06:56:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:10.073 06:56:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.073 06:56:17 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.073 06:56:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:10.073 06:56:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:10.073 06:56:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:10.073 06:56:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:10.073 06:56:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:10.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:10.073 06:56:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:10.073 06:56:17 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:10.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:10.073 06:56:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:10.073 06:56:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:10.073 06:56:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.073 06:56:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:10.073 06:56:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.073 06:56:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:10.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:10.073 06:56:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.073 06:56:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:10.073 06:56:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.073 06:56:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:10.073 06:56:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.073 06:56:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:10.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:10.073 06:56:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.073 06:56:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:10.073 06:56:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:10.073 06:56:17 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:16:10.073 06:56:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:10.073 06:56:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:10.073 06:56:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.073 06:56:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.073 06:56:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.073 06:56:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:10.073 06:56:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.073 06:56:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.073 06:56:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:10.073 06:56:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.073 06:56:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.073 06:56:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:10.073 06:56:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:10.073 06:56:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.073 06:56:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.073 06:56:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.073 06:56:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.073 06:56:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:10.073 06:56:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.073 06:56:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.073 06:56:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.331 06:56:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:10.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:10.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:16:10.331 00:16:10.331 --- 10.0.0.2 ping statistics --- 00:16:10.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.331 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:10.331 06:56:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:16:10.331 00:16:10.331 --- 10.0.0.1 ping statistics --- 00:16:10.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.331 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:16:10.331 06:56:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.331 06:56:17 -- nvmf/common.sh@410 -- # return 0 00:16:10.331 06:56:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:10.331 06:56:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.331 06:56:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:10.331 06:56:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:10.331 06:56:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.331 06:56:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:10.331 06:56:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:10.331 06:56:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:10.331 06:56:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:10.331 06:56:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:10.331 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:16:10.331 06:56:17 -- nvmf/common.sh@469 -- # nvmfpid=3037034 00:16:10.331 06:56:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:10.331 06:56:17 -- nvmf/common.sh@470 -- # waitforlisten 3037034 00:16:10.331 06:56:17 -- common/autotest_common.sh@819 
-- # '[' -z 3037034 ']' 00:16:10.331 06:56:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.331 06:56:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.331 06:56:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.331 06:56:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.331 06:56:17 -- common/autotest_common.sh@10 -- # set +x 00:16:10.331 [2024-05-12 06:56:17.276464] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:10.331 [2024-05-12 06:56:17.276557] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.331 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.331 [2024-05-12 06:56:17.341869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.331 [2024-05-12 06:56:17.451079] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:10.331 [2024-05-12 06:56:17.451235] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.331 [2024-05-12 06:56:17.451252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.331 [2024-05-12 06:56:17.451264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:10.331 [2024-05-12 06:56:17.451372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:10.331 [2024-05-12 06:56:17.451424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:10.331 [2024-05-12 06:56:17.451491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:10.331 [2024-05-12 06:56:17.451494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.295 06:56:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.295 06:56:18 -- common/autotest_common.sh@852 -- # return 0 00:16:11.295 06:56:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:11.295 06:56:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:11.295 06:56:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.295 06:56:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.295 06:56:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:11.295 06:56:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.295 06:56:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.295 [2024-05-12 06:56:18.285376] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.295 06:56:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.296 06:56:18 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:11.296 06:56:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.296 06:56:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.296 Malloc0 00:16:11.296 06:56:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.296 06:56:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:11.296 06:56:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.296 06:56:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.296 06:56:18 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:16:11.296 06:56:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:11.296 06:56:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.296 06:56:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.296 06:56:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.296 06:56:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.296 06:56:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.296 06:56:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.296 [2024-05-12 06:56:18.339056] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.296 06:56:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.296 06:56:18 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:11.296 06:56:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:11.296 06:56:18 -- nvmf/common.sh@520 -- # config=() 00:16:11.296 06:56:18 -- nvmf/common.sh@520 -- # local subsystem config 00:16:11.296 06:56:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:11.296 06:56:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:11.296 { 00:16:11.296 "params": { 00:16:11.296 "name": "Nvme$subsystem", 00:16:11.296 "trtype": "$TEST_TRANSPORT", 00:16:11.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:11.296 "adrfam": "ipv4", 00:16:11.296 "trsvcid": "$NVMF_PORT", 00:16:11.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:11.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:11.296 "hdgst": ${hdgst:-false}, 00:16:11.296 "ddgst": ${ddgst:-false} 00:16:11.296 }, 00:16:11.296 "method": "bdev_nvme_attach_controller" 00:16:11.296 } 00:16:11.296 EOF 00:16:11.296 )") 00:16:11.296 06:56:18 -- nvmf/common.sh@542 -- # cat 00:16:11.296 06:56:18 -- nvmf/common.sh@544 -- # jq . 
00:16:11.296 06:56:18 -- nvmf/common.sh@545 -- # IFS=, 00:16:11.296 06:56:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:11.296 "params": { 00:16:11.296 "name": "Nvme1", 00:16:11.296 "trtype": "tcp", 00:16:11.296 "traddr": "10.0.0.2", 00:16:11.296 "adrfam": "ipv4", 00:16:11.296 "trsvcid": "4420", 00:16:11.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:11.296 "hdgst": false, 00:16:11.296 "ddgst": false 00:16:11.296 }, 00:16:11.296 "method": "bdev_nvme_attach_controller" 00:16:11.296 }' 00:16:11.296 [2024-05-12 06:56:18.382006] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:11.296 [2024-05-12 06:56:18.382070] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037193 ] 00:16:11.296 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.554 [2024-05-12 06:56:18.441946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.554 [2024-05-12 06:56:18.552840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.554 [2024-05-12 06:56:18.552891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.554 [2024-05-12 06:56:18.552895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.812 [2024-05-12 06:56:18.853588] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:11.812 [2024-05-12 06:56:18.853638] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:11.812 I/O targets: 00:16:11.812 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:11.812 00:16:11.812 00:16:11.812 CUnit - A unit testing framework for C - Version 2.1-3 00:16:11.812 http://cunit.sourceforge.net/ 00:16:11.812 00:16:11.812 00:16:11.812 Suite: bdevio tests on: Nvme1n1 00:16:11.812 Test: blockdev write read block ...passed 00:16:12.070 Test: blockdev write zeroes read block ...passed 00:16:12.070 Test: blockdev write zeroes read no split ...passed 00:16:12.070 Test: blockdev write zeroes read split ...passed 00:16:12.070 Test: blockdev write zeroes read split partial ...passed 00:16:12.070 Test: blockdev reset ...[2024-05-12 06:56:18.985408] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:12.070 [2024-05-12 06:56:18.985506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151f180 (9): Bad file descriptor 00:16:12.070 [2024-05-12 06:56:19.041300] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:12.070 passed 00:16:12.070 Test: blockdev write read 8 blocks ...passed 00:16:12.070 Test: blockdev write read size > 128k ...passed 00:16:12.070 Test: blockdev write read invalid size ...passed 00:16:12.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:12.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:12.070 Test: blockdev write read max offset ...passed 00:16:12.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:12.328 Test: blockdev writev readv 8 blocks ...passed 00:16:12.328 Test: blockdev writev readv 30 x 1block ...passed 00:16:12.328 Test: blockdev writev readv block ...passed 00:16:12.328 Test: blockdev writev readv size > 128k ...passed 00:16:12.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:12.328 Test: blockdev comparev and writev ...[2024-05-12 06:56:19.299295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.328 [2024-05-12 06:56:19.299331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.299355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.328 [2024-05-12 06:56:19.299372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.299797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.328 [2024-05-12 06:56:19.299823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.299845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.328 [2024-05-12 06:56:19.299862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.300281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.328 [2024-05-12 06:56:19.300305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.300326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.328 [2024-05-12 06:56:19.300342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.300707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.328 [2024-05-12 06:56:19.300732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.300753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.328 [2024-05-12 06:56:19.300769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:12.328 passed 00:16:12.328 Test: blockdev nvme passthru rw ...passed 00:16:12.328 Test: blockdev nvme passthru vendor specific ...[2024-05-12 06:56:19.384095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.328 [2024-05-12 06:56:19.384123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.384332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.328 [2024-05-12 06:56:19.384357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.384564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.328 [2024-05-12 06:56:19.384587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:12.328 [2024-05-12 06:56:19.384789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.328 [2024-05-12 06:56:19.384813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:12.328 passed 00:16:12.328 Test: blockdev nvme admin passthru ...passed 00:16:12.328 Test: blockdev copy ...passed 00:16:12.328 00:16:12.328 Run Summary: Type Total Ran Passed Failed Inactive 00:16:12.328 suites 1 1 n/a 0 0 00:16:12.328 tests 23 23 23 0 0 00:16:12.328 asserts 152 152 152 0 n/a 00:16:12.328 00:16:12.328 Elapsed time = 1.164 seconds 00:16:12.586 06:56:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.586 06:56:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.586 06:56:19 -- common/autotest_common.sh@10 -- # set +x 00:16:12.586 06:56:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.586 06:56:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:12.586 06:56:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:12.586 06:56:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:12.586 06:56:19 -- nvmf/common.sh@116 -- # sync 00:16:12.586 
06:56:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:12.586 06:56:19 -- nvmf/common.sh@119 -- # set +e 00:16:12.586 06:56:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:12.586 06:56:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:12.586 rmmod nvme_tcp 00:16:12.586 rmmod nvme_fabrics 00:16:12.844 rmmod nvme_keyring 00:16:12.844 06:56:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:12.844 06:56:19 -- nvmf/common.sh@123 -- # set -e 00:16:12.844 06:56:19 -- nvmf/common.sh@124 -- # return 0 00:16:12.844 06:56:19 -- nvmf/common.sh@477 -- # '[' -n 3037034 ']' 00:16:12.844 06:56:19 -- nvmf/common.sh@478 -- # killprocess 3037034 00:16:12.844 06:56:19 -- common/autotest_common.sh@926 -- # '[' -z 3037034 ']' 00:16:12.844 06:56:19 -- common/autotest_common.sh@930 -- # kill -0 3037034 00:16:12.844 06:56:19 -- common/autotest_common.sh@931 -- # uname 00:16:12.844 06:56:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:12.844 06:56:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3037034 00:16:12.844 06:56:19 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:12.844 06:56:19 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:12.844 06:56:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3037034' 00:16:12.844 killing process with pid 3037034 00:16:12.844 06:56:19 -- common/autotest_common.sh@945 -- # kill 3037034 00:16:12.844 06:56:19 -- common/autotest_common.sh@950 -- # wait 3037034 00:16:13.102 06:56:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:13.102 06:56:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:13.102 06:56:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:13.102 06:56:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.102 06:56:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:13.102 06:56:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.102 06:56:20 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.102 06:56:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.006 06:56:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:15.006 00:16:15.006 real 0m7.136s 00:16:15.006 user 0m13.709s 00:16:15.006 sys 0m2.129s 00:16:15.006 06:56:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.006 06:56:22 -- common/autotest_common.sh@10 -- # set +x 00:16:15.006 ************************************ 00:16:15.006 END TEST nvmf_bdevio 00:16:15.006 ************************************ 00:16:15.265 06:56:22 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:15.265 06:56:22 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:15.265 06:56:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:15.265 06:56:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:15.265 06:56:22 -- common/autotest_common.sh@10 -- # set +x 00:16:15.265 ************************************ 00:16:15.265 START TEST nvmf_bdevio_no_huge 00:16:15.265 ************************************ 00:16:15.265 06:56:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:15.265 * Looking for test storage... 
00:16:15.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.265 06:56:22 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.265 06:56:22 -- nvmf/common.sh@7 -- # uname -s 00:16:15.265 06:56:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.265 06:56:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.265 06:56:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.265 06:56:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.265 06:56:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.265 06:56:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.265 06:56:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.265 06:56:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.265 06:56:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.265 06:56:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.265 06:56:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.265 06:56:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.265 06:56:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.265 06:56:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.265 06:56:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.265 06:56:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.265 06:56:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.265 06:56:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.266 06:56:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.266 06:56:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.266 06:56:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.266 06:56:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.266 06:56:22 -- paths/export.sh@5 -- # export PATH 00:16:15.266 06:56:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.266 06:56:22 -- nvmf/common.sh@46 -- # : 0 00:16:15.266 06:56:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:15.266 06:56:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:15.266 06:56:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:15.266 06:56:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.266 06:56:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.266 06:56:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:15.266 06:56:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:15.266 06:56:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:15.266 06:56:22 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.266 06:56:22 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.266 06:56:22 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:15.266 06:56:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:15.266 06:56:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.266 06:56:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:15.266 06:56:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:15.266 06:56:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:15.266 06:56:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.266 06:56:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.266 06:56:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.266 06:56:22 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:15.266 06:56:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:15.266 06:56:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:15.266 06:56:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.166 06:56:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:17.166 06:56:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:17.166 06:56:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:17.166 06:56:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:17.166 06:56:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:17.166 06:56:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:17.166 06:56:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:17.166 06:56:24 -- nvmf/common.sh@294 -- # net_devs=() 00:16:17.166 06:56:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:17.166 06:56:24 -- nvmf/common.sh@295 -- # e810=() 00:16:17.166 06:56:24 -- nvmf/common.sh@295 -- # local -ga e810 00:16:17.166 06:56:24 -- nvmf/common.sh@296 -- # x722=() 00:16:17.166 06:56:24 -- nvmf/common.sh@296 -- # local -ga x722 00:16:17.166 06:56:24 -- nvmf/common.sh@297 -- # mlx=() 00:16:17.166 06:56:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:17.166 06:56:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.166 06:56:24 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.166 06:56:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:17.166 06:56:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:17.166 06:56:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:17.166 06:56:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:17.166 06:56:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:17.166 06:56:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:17.166 06:56:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:17.166 06:56:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:17.166 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:17.166 06:56:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:17.166 06:56:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:17.166 06:56:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.166 06:56:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.166 06:56:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:17.166 06:56:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:17.167 06:56:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:17.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:17.167 06:56:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:17.167 06:56:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:17.167 06:56:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.167 06:56:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.167 06:56:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:17.167 06:56:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:17.167 06:56:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:17.167 06:56:24 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:17.167 06:56:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:17.167 06:56:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.167 06:56:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:17.167 06:56:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.167 06:56:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:17.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:17.167 06:56:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.167 06:56:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:17.167 06:56:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.167 06:56:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:17.167 06:56:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.167 06:56:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:17.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:17.167 06:56:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.167 06:56:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:17.167 06:56:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:17.167 06:56:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:17.167 06:56:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:17.167 06:56:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:17.167 06:56:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.167 06:56:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.167 06:56:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.167 06:56:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:17.167 06:56:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.167 06:56:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.167 06:56:24 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:17.167 06:56:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.167 06:56:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.167 06:56:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:17.167 06:56:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:17.167 06:56:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.167 06:56:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.426 06:56:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.426 06:56:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.426 06:56:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:17.426 06:56:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.426 06:56:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.426 06:56:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.426 06:56:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:17.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:16:17.426 00:16:17.426 --- 10.0.0.2 ping statistics --- 00:16:17.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.426 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:16:17.426 06:56:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:16:17.426 00:16:17.426 --- 10.0.0.1 ping statistics --- 00:16:17.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.426 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:16:17.426 06:56:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.426 06:56:24 -- nvmf/common.sh@410 -- # return 0 00:16:17.426 06:56:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:17.426 06:56:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.426 06:56:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:17.426 06:56:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:17.426 06:56:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.426 06:56:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:17.426 06:56:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:17.426 06:56:24 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:17.426 06:56:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.426 06:56:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:17.426 06:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:17.426 06:56:24 -- nvmf/common.sh@469 -- # nvmfpid=3039285 00:16:17.426 06:56:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:17.426 06:56:24 -- nvmf/common.sh@470 -- # waitforlisten 3039285 00:16:17.426 06:56:24 -- common/autotest_common.sh@819 -- # '[' -z 3039285 ']' 00:16:17.426 06:56:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.426 06:56:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.426 06:56:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:17.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.426 06:56:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.426 06:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:17.426 [2024-05-12 06:56:24.444272] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:17.426 [2024-05-12 06:56:24.444363] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:17.426 [2024-05-12 06:56:24.516649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.685 [2024-05-12 06:56:24.623219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.685 [2024-05-12 06:56:24.623376] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.685 [2024-05-12 06:56:24.623393] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.685 [2024-05-12 06:56:24.623405] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:17.685 [2024-05-12 06:56:24.623496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:17.685 [2024-05-12 06:56:24.623642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:17.685 [2024-05-12 06:56:24.623713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:17.685 [2024-05-12 06:56:24.623716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.619 06:56:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.619 06:56:25 -- common/autotest_common.sh@852 -- # return 0 00:16:18.619 06:56:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:18.619 06:56:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:18.619 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:18.619 06:56:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.619 06:56:25 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.619 06:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.619 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:18.619 [2024-05-12 06:56:25.405453] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.619 06:56:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.619 06:56:25 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.619 06:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.619 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:18.619 Malloc0 00:16:18.619 06:56:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.619 06:56:25 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.619 06:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.619 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:18.619 06:56:25 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:16:18.619 06:56:25 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.619 06:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.619 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:18.619 06:56:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.619 06:56:25 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.619 06:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.619 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:18.619 [2024-05-12 06:56:25.443198] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.619 06:56:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.619 06:56:25 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:18.619 06:56:25 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:18.619 06:56:25 -- nvmf/common.sh@520 -- # config=() 00:16:18.619 06:56:25 -- nvmf/common.sh@520 -- # local subsystem config 00:16:18.619 06:56:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:18.619 06:56:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:18.619 { 00:16:18.619 "params": { 00:16:18.619 "name": "Nvme$subsystem", 00:16:18.619 "trtype": "$TEST_TRANSPORT", 00:16:18.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.619 "adrfam": "ipv4", 00:16:18.619 "trsvcid": "$NVMF_PORT", 00:16:18.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.619 "hdgst": ${hdgst:-false}, 00:16:18.619 "ddgst": ${ddgst:-false} 00:16:18.619 }, 00:16:18.619 "method": "bdev_nvme_attach_controller" 00:16:18.619 } 00:16:18.619 EOF 00:16:18.619 )") 00:16:18.619 06:56:25 -- nvmf/common.sh@542 -- # cat 00:16:18.619 06:56:25 -- nvmf/common.sh@544 -- # jq 
. 00:16:18.619 06:56:25 -- nvmf/common.sh@545 -- # IFS=, 00:16:18.619 06:56:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:18.619 "params": { 00:16:18.619 "name": "Nvme1", 00:16:18.619 "trtype": "tcp", 00:16:18.619 "traddr": "10.0.0.2", 00:16:18.619 "adrfam": "ipv4", 00:16:18.619 "trsvcid": "4420", 00:16:18.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.619 "hdgst": false, 00:16:18.619 "ddgst": false 00:16:18.619 }, 00:16:18.619 "method": "bdev_nvme_attach_controller" 00:16:18.619 }' 00:16:18.619 [2024-05-12 06:56:25.484367] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:18.619 [2024-05-12 06:56:25.484445] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3039444 ] 00:16:18.619 [2024-05-12 06:56:25.547273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.619 [2024-05-12 06:56:25.660782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.619 [2024-05-12 06:56:25.660836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.619 [2024-05-12 06:56:25.660839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.877 [2024-05-12 06:56:25.980855] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:18.877 [2024-05-12 06:56:25.980904] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:18.877 I/O targets: 00:16:18.877 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:18.877 00:16:18.877 00:16:18.877 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.877 http://cunit.sourceforge.net/ 00:16:18.877 00:16:18.877 00:16:18.877 Suite: bdevio tests on: Nvme1n1 00:16:19.135 Test: blockdev write read block ...passed 00:16:19.135 Test: blockdev write zeroes read block ...passed 00:16:19.135 Test: blockdev write zeroes read no split ...passed 00:16:19.135 Test: blockdev write zeroes read split ...passed 00:16:19.135 Test: blockdev write zeroes read split partial ...passed 00:16:19.135 Test: blockdev reset ...[2024-05-12 06:56:26.196139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:19.135 [2024-05-12 06:56:26.196246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afeb00 (9): Bad file descriptor 00:16:19.135 [2024-05-12 06:56:26.258613] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:19.135 passed 00:16:19.393 Test: blockdev write read 8 blocks ...passed 00:16:19.393 Test: blockdev write read size > 128k ...passed 00:16:19.393 Test: blockdev write read invalid size ...passed 00:16:19.393 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:19.393 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:19.393 Test: blockdev write read max offset ...passed 00:16:19.393 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:19.393 Test: blockdev writev readv 8 blocks ...passed 00:16:19.393 Test: blockdev writev readv 30 x 1block ...passed 00:16:19.651 Test: blockdev writev readv block ...passed 00:16:19.651 Test: blockdev writev readv size > 128k ...passed 00:16:19.651 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:19.651 Test: blockdev comparev and writev ...[2024-05-12 06:56:26.557281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.651 [2024-05-12 06:56:26.557317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.651 [2024-05-12 06:56:26.557341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.651 [2024-05-12 06:56:26.557357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.651 [2024-05-12 06:56:26.557784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.651 [2024-05-12 06:56:26.557811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:19.651 [2024-05-12 06:56:26.557832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.651 [2024-05-12 06:56:26.557851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:19.651 [2024-05-12 06:56:26.558244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.652 [2024-05-12 06:56:26.558267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:19.652 [2024-05-12 06:56:26.558288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.652 [2024-05-12 06:56:26.558303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:19.652 [2024-05-12 06:56:26.558706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.652 [2024-05-12 06:56:26.558730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:19.652 [2024-05-12 06:56:26.558751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.652 [2024-05-12 06:56:26.558766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:19.652 passed 00:16:19.652 Test: blockdev nvme passthru rw ...passed 00:16:19.652 Test: blockdev nvme passthru vendor specific ...[2024-05-12 06:56:26.642099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.652 [2024-05-12 06:56:26.642125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:19.652 [2024-05-12 06:56:26.642339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.652 [2024-05-12 06:56:26.642363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:19.652 [2024-05-12 06:56:26.642568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.652 [2024-05-12 06:56:26.642599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:19.652 [2024-05-12 06:56:26.642814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.652 [2024-05-12 06:56:26.642837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:19.652 passed 00:16:19.652 Test: blockdev nvme admin passthru ...passed 00:16:19.652 Test: blockdev copy ...passed 00:16:19.652 00:16:19.652 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.652 suites 1 1 n/a 0 0 00:16:19.652 tests 23 23 23 0 0 00:16:19.652 asserts 152 152 152 0 n/a 00:16:19.652 00:16:19.652 Elapsed time = 1.433 seconds 00:16:20.219 06:56:27 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.219 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:20.219 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:16:20.219 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:20.219 06:56:27 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:20.219 06:56:27 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:20.219 06:56:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:20.219 06:56:27 -- nvmf/common.sh@116 -- # sync 00:16:20.219 
06:56:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:20.219 06:56:27 -- nvmf/common.sh@119 -- # set +e 00:16:20.219 06:56:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:20.219 06:56:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:20.219 rmmod nvme_tcp 00:16:20.219 rmmod nvme_fabrics 00:16:20.219 rmmod nvme_keyring 00:16:20.219 06:56:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:20.219 06:56:27 -- nvmf/common.sh@123 -- # set -e 00:16:20.219 06:56:27 -- nvmf/common.sh@124 -- # return 0 00:16:20.219 06:56:27 -- nvmf/common.sh@477 -- # '[' -n 3039285 ']' 00:16:20.219 06:56:27 -- nvmf/common.sh@478 -- # killprocess 3039285 00:16:20.219 06:56:27 -- common/autotest_common.sh@926 -- # '[' -z 3039285 ']' 00:16:20.219 06:56:27 -- common/autotest_common.sh@930 -- # kill -0 3039285 00:16:20.219 06:56:27 -- common/autotest_common.sh@931 -- # uname 00:16:20.219 06:56:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:20.219 06:56:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3039285 00:16:20.219 06:56:27 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:20.219 06:56:27 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:20.219 06:56:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3039285' 00:16:20.219 killing process with pid 3039285 00:16:20.219 06:56:27 -- common/autotest_common.sh@945 -- # kill 3039285 00:16:20.219 06:56:27 -- common/autotest_common.sh@950 -- # wait 3039285 00:16:20.477 06:56:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:20.477 06:56:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:20.477 06:56:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:20.477 06:56:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.477 06:56:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:20.477 06:56:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.477 06:56:27 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.477 06:56:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.017 06:56:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:23.017 00:16:23.017 real 0m7.486s 00:16:23.017 user 0m15.074s 00:16:23.017 sys 0m2.563s 00:16:23.017 06:56:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.017 06:56:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.017 ************************************ 00:16:23.017 END TEST nvmf_bdevio_no_huge 00:16:23.017 ************************************ 00:16:23.017 06:56:29 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:23.017 06:56:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:23.017 06:56:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:23.017 06:56:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.017 ************************************ 00:16:23.017 START TEST nvmf_tls 00:16:23.017 ************************************ 00:16:23.017 06:56:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:23.017 * Looking for test storage... 
00:16:23.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.017 06:56:29 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.017 06:56:29 -- nvmf/common.sh@7 -- # uname -s 00:16:23.017 06:56:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.017 06:56:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.017 06:56:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.017 06:56:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.017 06:56:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.017 06:56:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.017 06:56:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.017 06:56:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.017 06:56:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.017 06:56:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.017 06:56:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.017 06:56:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.017 06:56:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.017 06:56:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.017 06:56:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.017 06:56:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.017 06:56:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.017 06:56:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.017 06:56:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.017 06:56:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.017 06:56:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.017 06:56:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.017 06:56:29 -- paths/export.sh@5 -- # export PATH 00:16:23.017 06:56:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.017 06:56:29 -- nvmf/common.sh@46 -- # : 0 00:16:23.017 06:56:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:23.017 06:56:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:23.017 06:56:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:23.017 06:56:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.017 06:56:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.017 06:56:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:23.017 06:56:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:23.017 06:56:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:23.017 06:56:29 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:23.017 06:56:29 -- target/tls.sh@71 -- # nvmftestinit 00:16:23.017 06:56:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:23.017 06:56:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.017 06:56:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:23.017 06:56:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:23.017 06:56:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:23.017 06:56:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.017 06:56:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.017 06:56:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.017 06:56:29 -- nvmf/common.sh@402 -- # [[ phy != virt 
]] 00:16:23.017 06:56:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:23.017 06:56:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:23.017 06:56:29 -- common/autotest_common.sh@10 -- # set +x 00:16:24.918 06:56:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:24.918 06:56:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:24.918 06:56:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:24.918 06:56:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:24.918 06:56:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:24.918 06:56:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:24.918 06:56:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:24.918 06:56:31 -- nvmf/common.sh@294 -- # net_devs=() 00:16:24.918 06:56:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:24.918 06:56:31 -- nvmf/common.sh@295 -- # e810=() 00:16:24.918 06:56:31 -- nvmf/common.sh@295 -- # local -ga e810 00:16:24.918 06:56:31 -- nvmf/common.sh@296 -- # x722=() 00:16:24.918 06:56:31 -- nvmf/common.sh@296 -- # local -ga x722 00:16:24.918 06:56:31 -- nvmf/common.sh@297 -- # mlx=() 00:16:24.918 06:56:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:24.918 06:56:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.918 06:56:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:24.918 06:56:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:24.918 06:56:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:24.918 06:56:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:24.918 06:56:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:24.918 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:24.918 06:56:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:24.918 06:56:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:24.918 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:24.918 06:56:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:24.918 06:56:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:24.918 06:56:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.918 06:56:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:24.918 06:56:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.918 06:56:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:24.918 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:24.918 06:56:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.918 06:56:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:24.918 06:56:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.918 06:56:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:24.918 06:56:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.918 06:56:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:24.918 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:24.918 06:56:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.918 06:56:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:24.918 06:56:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:24.918 06:56:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:24.918 06:56:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.918 06:56:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.918 06:56:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.918 06:56:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:24.918 06:56:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.918 06:56:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.918 06:56:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:16:24.918 06:56:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.918 06:56:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.918 06:56:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:24.918 06:56:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:24.918 06:56:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.918 06:56:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.918 06:56:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.918 06:56:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.918 06:56:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:24.918 06:56:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.918 06:56:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.918 06:56:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.918 06:56:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:24.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:16:24.918 00:16:24.918 --- 10.0.0.2 ping statistics --- 00:16:24.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.918 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:24.918 06:56:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:16:24.918 00:16:24.918 --- 10.0.0.1 ping statistics --- 00:16:24.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.918 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:16:24.918 06:56:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.918 06:56:31 -- nvmf/common.sh@410 -- # return 0 00:16:24.918 06:56:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:24.918 06:56:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.918 06:56:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:24.918 06:56:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.918 06:56:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:24.918 06:56:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:24.918 06:56:31 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:24.918 06:56:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:24.918 06:56:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:24.918 06:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:24.918 06:56:31 -- nvmf/common.sh@469 -- # nvmfpid=3041651 00:16:24.918 06:56:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:24.918 06:56:31 -- nvmf/common.sh@470 -- # waitforlisten 3041651 00:16:24.918 06:56:31 -- common/autotest_common.sh@819 -- # '[' -z 3041651 ']' 00:16:24.918 06:56:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.918 06:56:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:24.918 06:56:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:24.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.918 06:56:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:24.918 06:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:24.918 [2024-05-12 06:56:31.877552] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:24.918 [2024-05-12 06:56:31.877626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.918 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.918 [2024-05-12 06:56:31.942779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.176 [2024-05-12 06:56:32.053529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:25.176 [2024-05-12 06:56:32.053673] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.176 [2024-05-12 06:56:32.053714] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.176 [2024-05-12 06:56:32.053728] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:25.176 [2024-05-12 06:56:32.053763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.176 06:56:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:25.176 06:56:32 -- common/autotest_common.sh@852 -- # return 0 00:16:25.176 06:56:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:25.176 06:56:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:25.176 06:56:32 -- common/autotest_common.sh@10 -- # set +x 00:16:25.176 06:56:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.176 06:56:32 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:25.176 06:56:32 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:25.434 true 00:16:25.434 06:56:32 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.434 06:56:32 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:25.692 06:56:32 -- target/tls.sh@82 -- # version=0 00:16:25.692 06:56:32 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:25.692 06:56:32 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:25.949 06:56:32 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.949 06:56:32 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:26.207 06:56:33 -- target/tls.sh@90 -- # version=13 00:16:26.207 06:56:33 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:26.207 06:56:33 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:26.465 06:56:33 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:26.465 06:56:33 -- target/tls.sh@98 -- # jq -r .tls_version 
00:16:26.725 06:56:33 -- target/tls.sh@98 -- # version=7 00:16:26.725 06:56:33 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:26.725 06:56:33 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:26.725 06:56:33 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:27.013 06:56:33 -- target/tls.sh@105 -- # ktls=false 00:16:27.013 06:56:33 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:27.013 06:56:33 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:27.271 06:56:34 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:27.271 06:56:34 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:27.529 06:56:34 -- target/tls.sh@113 -- # ktls=true 00:16:27.529 06:56:34 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:27.529 06:56:34 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:27.788 06:56:34 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:27.788 06:56:34 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:28.046 06:56:34 -- target/tls.sh@121 -- # ktls=false 00:16:28.046 06:56:34 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:28.046 06:56:34 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:28.046 06:56:34 -- target/tls.sh@49 -- # local key hash crc 00:16:28.046 06:56:34 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:28.046 06:56:34 -- target/tls.sh@51 -- # hash=01 00:16:28.046 06:56:34 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:28.046 06:56:34 -- target/tls.sh@52 -- # gzip -1 -c 00:16:28.046 06:56:34 -- target/tls.sh@52 -- # tail -c8 00:16:28.046 06:56:34 -- 
target/tls.sh@52 -- # head -c 4 00:16:28.046 06:56:34 -- target/tls.sh@52 -- # crc='p$H�' 00:16:28.046 06:56:34 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:28.046 06:56:34 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:28.046 06:56:34 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:28.046 06:56:34 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:28.046 06:56:34 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:28.046 06:56:34 -- target/tls.sh@49 -- # local key hash crc 00:16:28.046 06:56:34 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:28.046 06:56:34 -- target/tls.sh@51 -- # hash=01 00:16:28.046 06:56:34 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:28.046 06:56:34 -- target/tls.sh@52 -- # gzip -1 -c 00:16:28.047 06:56:34 -- target/tls.sh@52 -- # tail -c8 00:16:28.047 06:56:34 -- target/tls.sh@52 -- # head -c 4 00:16:28.047 06:56:34 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:28.047 06:56:34 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:28.047 06:56:34 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:28.047 06:56:34 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:28.047 06:56:34 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:28.047 06:56:34 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:28.047 06:56:34 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:28.047 06:56:34 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:28.047 06:56:34 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:28.047 06:56:34 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:28.047 06:56:34 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:28.047 06:56:34 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:28.304 06:56:35 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:28.562 06:56:35 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:28.563 06:56:35 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:28.563 06:56:35 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:28.820 [2024-05-12 06:56:35.834889] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.820 06:56:35 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:29.078 06:56:36 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:29.335 [2024-05-12 06:56:36.320184] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:29.335 [2024-05-12 06:56:36.320394] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.335 06:56:36 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:29.593 malloc0 00:16:29.593 06:56:36 -- 
target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:29.850 06:56:36 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:30.108 06:56:37 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:30.108 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.076 Initializing NVMe Controllers 00:16:40.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:40.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:40.076 Initialization complete. Launching workers. 
00:16:40.076 ======================================================== 00:16:40.076 Latency(us) 00:16:40.076 Device Information : IOPS MiB/s Average min max 00:16:40.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7778.00 30.38 8230.93 1334.47 9158.66 00:16:40.076 ======================================================== 00:16:40.076 Total : 7778.00 30.38 8230.93 1334.47 9158.66 00:16:40.076 00:16:40.076 06:56:47 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:40.076 06:56:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:40.076 06:56:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:40.076 06:56:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:40.076 06:56:47 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:16:40.076 06:56:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:40.076 06:56:47 -- target/tls.sh@28 -- # bdevperf_pid=3043492 00:16:40.076 06:56:47 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:40.076 06:56:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:40.076 06:56:47 -- target/tls.sh@31 -- # waitforlisten 3043492 /var/tmp/bdevperf.sock 00:16:40.076 06:56:47 -- common/autotest_common.sh@819 -- # '[' -z 3043492 ']' 00:16:40.076 06:56:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.076 06:56:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:40.076 06:56:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:40.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.076 06:56:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:40.076 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:16:40.076 [2024-05-12 06:56:47.180739] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:40.076 [2024-05-12 06:56:47.180832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043492 ] 00:16:40.334 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.334 [2024-05-12 06:56:47.240111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.334 [2024-05-12 06:56:47.342544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.270 06:56:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:41.270 06:56:48 -- common/autotest_common.sh@852 -- # return 0 00:16:41.270 06:56:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:41.270 [2024-05-12 06:56:48.328077] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:41.528 TLSTESTn1 00:16:41.528 06:56:48 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:41.528 Running I/O for 10 seconds... 
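The `format_interchange_psk` pipeline traced above (target/tls.sh@49-54: `echo -n $key | gzip -1`, take the first 4 of the last 8 trailer bytes as the CRC, append to the key, base64, prefix `NVMeTLSkey-1:01:`) can be reproduced with a short Python sketch. This is an illustrative reimplementation for reading the log, not part of the SPDK test scripts; it relies on the gzip trailer being the little-endian CRC32 of the input:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hash_id: int = 1) -> str:
    """Build an NVMe TLS interchange PSK the way target/tls.sh does.

    The shell pipeline `gzip -1 -c | tail -c8 | head -c4` extracts the
    little-endian CRC32 from the gzip trailer; the same 4 bytes come
    straight from zlib.crc32 here.
    """
    crc = struct.pack("<I", zlib.crc32(key.encode()))      # gzip trailer CRC32, LE
    b64 = base64.b64encode(key.encode() + crc).decode()    # base64(key || crc)
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"

# Reproduces the two keys derived in the trace above
print(format_interchange_psk("00112233445566778899aabbccddeeff"))
print(format_interchange_psk("ffeeddccbbaa99887766554433221100"))
```

Both printed keys match the `NVMeTLSkey-1:01:…` values the harness writes to key1.txt and key2.txt, which is a handy cross-check when debugging a PSK mismatch like the `Could not find PSK for identity` errors later in this run.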
00:16:51.486 00:16:51.486 Latency(us) 00:16:51.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.486 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:51.486 Verification LBA range: start 0x0 length 0x2000 00:16:51.486 TLSTESTn1 : 10.04 1963.77 7.67 0.00 0.00 65075.32 9854.67 74177.04 00:16:51.486 =================================================================================================================== 00:16:51.486 Total : 1963.77 7.67 0.00 0.00 65075.32 9854.67 74177.04 00:16:51.486 0 00:16:51.486 06:56:58 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.486 06:56:58 -- target/tls.sh@45 -- # killprocess 3043492 00:16:51.486 06:56:58 -- common/autotest_common.sh@926 -- # '[' -z 3043492 ']' 00:16:51.486 06:56:58 -- common/autotest_common.sh@930 -- # kill -0 3043492 00:16:51.486 06:56:58 -- common/autotest_common.sh@931 -- # uname 00:16:51.486 06:56:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:51.486 06:56:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3043492 00:16:51.745 06:56:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:51.745 06:56:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:51.745 06:56:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3043492' 00:16:51.745 killing process with pid 3043492 00:16:51.745 06:56:58 -- common/autotest_common.sh@945 -- # kill 3043492 00:16:51.745 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.745 00:16:51.745 Latency(us) 00:16:51.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.745 =================================================================================================================== 00:16:51.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.745 06:56:58 -- common/autotest_common.sh@950 -- # wait 3043492 00:16:51.745 06:56:58 -- 
target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:51.745 06:56:58 -- common/autotest_common.sh@640 -- # local es=0 00:16:51.745 06:56:58 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:51.745 06:56:58 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:51.745 06:56:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:51.745 06:56:58 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:51.745 06:56:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:51.745 06:56:58 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:51.745 06:56:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:51.745 06:56:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:51.745 06:56:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:51.745 06:56:58 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:16:51.745 06:56:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.745 06:56:58 -- target/tls.sh@28 -- # bdevperf_pid=3044868 00:16:51.745 06:56:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:51.745 06:56:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.745 06:56:58 -- target/tls.sh@31 -- # waitforlisten 3044868 /var/tmp/bdevperf.sock 00:16:51.745 06:56:58 -- common/autotest_common.sh@819 -- # '[' -z 3044868 ']' 00:16:51.745 06:56:58 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.745 06:56:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.745 06:56:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.745 06:56:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.745 06:56:58 -- common/autotest_common.sh@10 -- # set +x 00:16:52.004 [2024-05-12 06:56:58.908120] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:52.004 [2024-05-12 06:56:58.908204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044868 ] 00:16:52.004 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.004 [2024-05-12 06:56:58.968364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.004 [2024-05-12 06:56:59.074262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.938 06:56:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.938 06:56:59 -- common/autotest_common.sh@852 -- # return 0 00:16:52.938 06:56:59 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:53.197 [2024-05-12 06:57:00.115754] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.197 [2024-05-12 06:57:00.121496] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:53.197 [2024-05-12 06:57:00.121916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0f870 (107): Transport endpoint is not connected 00:16:53.197 [2024-05-12 06:57:00.122904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0f870 (9): Bad file descriptor 00:16:53.197 [2024-05-12 06:57:00.123903] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:53.197 [2024-05-12 06:57:00.123928] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:53.197 [2024-05-12 06:57:00.123946] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:53.197 request: 00:16:53.197 { 00:16:53.197 "name": "TLSTEST", 00:16:53.197 "trtype": "tcp", 00:16:53.197 "traddr": "10.0.0.2", 00:16:53.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.197 "adrfam": "ipv4", 00:16:53.197 "trsvcid": "4420", 00:16:53.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.197 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:16:53.197 "method": "bdev_nvme_attach_controller", 00:16:53.197 "req_id": 1 00:16:53.197 } 00:16:53.198 Got JSON-RPC error response 00:16:53.198 response: 00:16:53.198 { 00:16:53.198 "code": -32602, 00:16:53.198 "message": "Invalid parameters" 00:16:53.198 } 00:16:53.198 06:57:00 -- target/tls.sh@36 -- # killprocess 3044868 00:16:53.198 06:57:00 -- common/autotest_common.sh@926 -- # '[' -z 3044868 ']' 00:16:53.198 06:57:00 -- common/autotest_common.sh@930 -- # kill -0 3044868 00:16:53.198 06:57:00 -- common/autotest_common.sh@931 -- # uname 00:16:53.198 06:57:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:53.198 06:57:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3044868 00:16:53.198 06:57:00 -- 
common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:53.198 06:57:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:53.198 06:57:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3044868' 00:16:53.198 killing process with pid 3044868 00:16:53.198 06:57:00 -- common/autotest_common.sh@945 -- # kill 3044868 00:16:53.198 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.198 00:16:53.198 Latency(us) 00:16:53.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.198 =================================================================================================================== 00:16:53.198 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:53.198 06:57:00 -- common/autotest_common.sh@950 -- # wait 3044868 00:16:53.458 06:57:00 -- target/tls.sh@37 -- # return 1 00:16:53.458 06:57:00 -- common/autotest_common.sh@643 -- # es=1 00:16:53.458 06:57:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:53.458 06:57:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:53.458 06:57:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:53.458 06:57:00 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:53.458 06:57:00 -- common/autotest_common.sh@640 -- # local es=0 00:16:53.458 06:57:00 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:53.458 06:57:00 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:53.458 06:57:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.458 06:57:00 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:53.458 06:57:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" 
in 00:16:53.458 06:57:00 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:53.458 06:57:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:53.458 06:57:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:53.458 06:57:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:53.458 06:57:00 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:16:53.458 06:57:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.458 06:57:00 -- target/tls.sh@28 -- # bdevperf_pid=3045180 00:16:53.458 06:57:00 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.458 06:57:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.458 06:57:00 -- target/tls.sh@31 -- # waitforlisten 3045180 /var/tmp/bdevperf.sock 00:16:53.458 06:57:00 -- common/autotest_common.sh@819 -- # '[' -z 3045180 ']' 00:16:53.458 06:57:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.458 06:57:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.458 06:57:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.458 06:57:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.458 06:57:00 -- common/autotest_common.sh@10 -- # set +x 00:16:53.458 [2024-05-12 06:57:00.455691] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:53.458 [2024-05-12 06:57:00.455813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045180 ] 00:16:53.458 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.458 [2024-05-12 06:57:00.517303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.773 [2024-05-12 06:57:00.627768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.338 06:57:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.338 06:57:01 -- common/autotest_common.sh@852 -- # return 0 00:16:54.338 06:57:01 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:54.596 [2024-05-12 06:57:01.651791] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.596 [2024-05-12 06:57:01.662786] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:54.596 [2024-05-12 06:57:01.662819] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:54.596 [2024-05-12 06:57:01.662872] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:54.596 [2024-05-12 06:57:01.663109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2473870 (107): Transport endpoint is not connected 00:16:54.596 [2024-05-12 
06:57:01.664100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2473870 (9): Bad file descriptor 00:16:54.596 [2024-05-12 06:57:01.665105] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:54.596 [2024-05-12 06:57:01.665125] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:54.596 [2024-05-12 06:57:01.665151] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:54.596 request: 00:16:54.596 { 00:16:54.596 "name": "TLSTEST", 00:16:54.596 "trtype": "tcp", 00:16:54.596 "traddr": "10.0.0.2", 00:16:54.596 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:54.596 "adrfam": "ipv4", 00:16:54.596 "trsvcid": "4420", 00:16:54.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.596 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:16:54.596 "method": "bdev_nvme_attach_controller", 00:16:54.596 "req_id": 1 00:16:54.596 } 00:16:54.596 Got JSON-RPC error response 00:16:54.596 response: 00:16:54.596 { 00:16:54.596 "code": -32602, 00:16:54.596 "message": "Invalid parameters" 00:16:54.596 } 00:16:54.596 06:57:01 -- target/tls.sh@36 -- # killprocess 3045180 00:16:54.596 06:57:01 -- common/autotest_common.sh@926 -- # '[' -z 3045180 ']' 00:16:54.596 06:57:01 -- common/autotest_common.sh@930 -- # kill -0 3045180 00:16:54.596 06:57:01 -- common/autotest_common.sh@931 -- # uname 00:16:54.596 06:57:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:54.596 06:57:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3045180 00:16:54.596 06:57:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:54.596 06:57:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:54.596 06:57:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3045180' 00:16:54.596 killing process with pid 3045180 00:16:54.596 
06:57:01 -- common/autotest_common.sh@945 -- # kill 3045180 00:16:54.596 Received shutdown signal, test time was about 10.000000 seconds 00:16:54.596 00:16:54.596 Latency(us) 00:16:54.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.596 =================================================================================================================== 00:16:54.596 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:54.596 06:57:01 -- common/autotest_common.sh@950 -- # wait 3045180 00:16:54.854 06:57:01 -- target/tls.sh@37 -- # return 1 00:16:54.854 06:57:01 -- common/autotest_common.sh@643 -- # es=1 00:16:54.854 06:57:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:54.854 06:57:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:54.854 06:57:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:54.854 06:57:01 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:54.854 06:57:01 -- common/autotest_common.sh@640 -- # local es=0 00:16:54.854 06:57:01 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:54.854 06:57:01 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:54.854 06:57:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:54.854 06:57:01 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:54.854 06:57:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:54.854 06:57:01 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:54.854 06:57:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:54.854 06:57:01 -- 
target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:54.854 06:57:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:54.854 06:57:01 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:16:54.854 06:57:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.854 06:57:01 -- target/tls.sh@28 -- # bdevperf_pid=3045403 00:16:54.854 06:57:01 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:54.854 06:57:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.854 06:57:01 -- target/tls.sh@31 -- # waitforlisten 3045403 /var/tmp/bdevperf.sock 00:16:54.854 06:57:01 -- common/autotest_common.sh@819 -- # '[' -z 3045403 ']' 00:16:54.854 06:57:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.854 06:57:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.854 06:57:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.854 06:57:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.854 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:16:55.112 [2024-05-12 06:57:01.991016] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:55.112 [2024-05-12 06:57:01.991111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045403 ] 00:16:55.112 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.112 [2024-05-12 06:57:02.048775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.112 [2024-05-12 06:57:02.149638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.041 06:57:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:56.041 06:57:02 -- common/autotest_common.sh@852 -- # return 0 00:16:56.041 06:57:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:56.041 [2024-05-12 06:57:03.163160] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:56.041 [2024-05-12 06:57:03.168412] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:56.041 [2024-05-12 06:57:03.168445] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:56.041 [2024-05-12 06:57:03.168493] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:56.041 [2024-05-12 06:57:03.169038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0870 (107): Transport endpoint is not connected 00:16:56.041 [2024-05-12 
06:57:03.170026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be0870 (9): Bad file descriptor 00:16:56.298 [2024-05-12 06:57:03.171025] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:56.298 [2024-05-12 06:57:03.171060] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:56.298 [2024-05-12 06:57:03.171077] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:16:56.298 request: 00:16:56.299 { 00:16:56.299 "name": "TLSTEST", 00:16:56.299 "trtype": "tcp", 00:16:56.299 "traddr": "10.0.0.2", 00:16:56.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.299 "adrfam": "ipv4", 00:16:56.299 "trsvcid": "4420", 00:16:56.299 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:56.299 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:16:56.299 "method": "bdev_nvme_attach_controller", 00:16:56.299 "req_id": 1 00:16:56.299 } 00:16:56.299 Got JSON-RPC error response 00:16:56.299 response: 00:16:56.299 { 00:16:56.299 "code": -32602, 00:16:56.299 "message": "Invalid parameters" 00:16:56.299 } 00:16:56.299 06:57:03 -- target/tls.sh@36 -- # killprocess 3045403 00:16:56.299 06:57:03 -- common/autotest_common.sh@926 -- # '[' -z 3045403 ']' 00:16:56.299 06:57:03 -- common/autotest_common.sh@930 -- # kill -0 3045403 00:16:56.299 06:57:03 -- common/autotest_common.sh@931 -- # uname 00:16:56.299 06:57:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.299 06:57:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3045403 00:16:56.299 06:57:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:56.299 06:57:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:56.299 06:57:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3045403' 00:16:56.299 killing process with pid 3045403 00:16:56.299 
06:57:03 -- common/autotest_common.sh@945 -- # kill 3045403 00:16:56.299 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.299 00:16:56.299 Latency(us) 00:16:56.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.299 =================================================================================================================== 00:16:56.299 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:56.299 06:57:03 -- common/autotest_common.sh@950 -- # wait 3045403 00:16:56.557 06:57:03 -- target/tls.sh@37 -- # return 1 00:16:56.557 06:57:03 -- common/autotest_common.sh@643 -- # es=1 00:16:56.557 06:57:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:56.557 06:57:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:56.557 06:57:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:56.557 06:57:03 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:56.557 06:57:03 -- common/autotest_common.sh@640 -- # local es=0 00:16:56.557 06:57:03 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:56.557 06:57:03 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:56.557 06:57:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:56.557 06:57:03 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:56.557 06:57:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:56.557 06:57:03 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:56.557 06:57:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:56.557 06:57:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:56.557 06:57:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:56.557 06:57:03 -- target/tls.sh@23 -- # psk= 00:16:56.557 06:57:03 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:56.557 06:57:03 -- target/tls.sh@28 -- # bdevperf_pid=3045588 00:16:56.557 06:57:03 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:56.557 06:57:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:56.557 06:57:03 -- target/tls.sh@31 -- # waitforlisten 3045588 /var/tmp/bdevperf.sock 00:16:56.557 06:57:03 -- common/autotest_common.sh@819 -- # '[' -z 3045588 ']' 00:16:56.557 06:57:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.557 06:57:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.557 06:57:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.557 06:57:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.557 06:57:03 -- common/autotest_common.sh@10 -- # set +x 00:16:56.557 [2024-05-12 06:57:03.497509] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:56.557 [2024-05-12 06:57:03.497578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045588 ] 00:16:56.557 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.557 [2024-05-12 06:57:03.556571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.557 [2024-05-12 06:57:03.660198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.490 06:57:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.490 06:57:04 -- common/autotest_common.sh@852 -- # return 0 00:16:57.490 06:57:04 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:57.748 [2024-05-12 06:57:04.722234] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:57.748 [2024-05-12 06:57:04.724058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f7330 (9): Bad file descriptor 00:16:57.748 [2024-05-12 06:57:04.725056] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:57.748 [2024-05-12 06:57:04.725077] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:57.748 [2024-05-12 06:57:04.725094] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:57.748 request: 00:16:57.748 { 00:16:57.748 "name": "TLSTEST", 00:16:57.748 "trtype": "tcp", 00:16:57.748 "traddr": "10.0.0.2", 00:16:57.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.748 "adrfam": "ipv4", 00:16:57.748 "trsvcid": "4420", 00:16:57.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.748 "method": "bdev_nvme_attach_controller", 00:16:57.748 "req_id": 1 00:16:57.748 } 00:16:57.748 Got JSON-RPC error response 00:16:57.748 response: 00:16:57.748 { 00:16:57.748 "code": -32602, 00:16:57.748 "message": "Invalid parameters" 00:16:57.748 } 00:16:57.748 06:57:04 -- target/tls.sh@36 -- # killprocess 3045588 00:16:57.748 06:57:04 -- common/autotest_common.sh@926 -- # '[' -z 3045588 ']' 00:16:57.748 06:57:04 -- common/autotest_common.sh@930 -- # kill -0 3045588 00:16:57.748 06:57:04 -- common/autotest_common.sh@931 -- # uname 00:16:57.748 06:57:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:57.748 06:57:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3045588 00:16:57.748 06:57:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:57.748 06:57:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:57.748 06:57:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3045588' 00:16:57.748 killing process with pid 3045588 00:16:57.748 06:57:04 -- common/autotest_common.sh@945 -- # kill 3045588 00:16:57.748 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.748 00:16:57.748 Latency(us) 00:16:57.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.748 =================================================================================================================== 00:16:57.748 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:57.748 06:57:04 -- common/autotest_common.sh@950 -- # wait 3045588 00:16:58.007 06:57:04 -- target/tls.sh@37 -- # return 1 00:16:58.007 06:57:04 -- 
common/autotest_common.sh@643 -- # es=1 00:16:58.007 06:57:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:58.007 06:57:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:58.007 06:57:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:58.007 06:57:05 -- target/tls.sh@167 -- # killprocess 3041651 00:16:58.007 06:57:05 -- common/autotest_common.sh@926 -- # '[' -z 3041651 ']' 00:16:58.007 06:57:05 -- common/autotest_common.sh@930 -- # kill -0 3041651 00:16:58.007 06:57:05 -- common/autotest_common.sh@931 -- # uname 00:16:58.007 06:57:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:58.007 06:57:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3041651 00:16:58.007 06:57:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:58.007 06:57:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:58.007 06:57:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3041651' 00:16:58.007 killing process with pid 3041651 00:16:58.007 06:57:05 -- common/autotest_common.sh@945 -- # kill 3041651 00:16:58.007 06:57:05 -- common/autotest_common.sh@950 -- # wait 3041651 00:16:58.265 06:57:05 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:16:58.265 06:57:05 -- target/tls.sh@49 -- # local key hash crc 00:16:58.265 06:57:05 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:58.265 06:57:05 -- target/tls.sh@51 -- # hash=02 00:16:58.265 06:57:05 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:58.265 06:57:05 -- target/tls.sh@52 -- # gzip -1 -c 00:16:58.265 06:57:05 -- target/tls.sh@52 -- # tail -c8 00:16:58.265 06:57:05 -- target/tls.sh@52 -- # head -c 4 00:16:58.265 06:57:05 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:58.265 06:57:05 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:58.265 06:57:05 -- target/tls.sh@54 -- # echo -n 
'00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:58.265 06:57:05 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:58.265 06:57:05 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:58.265 06:57:05 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:58.265 06:57:05 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:58.265 06:57:05 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:58.265 06:57:05 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:58.265 06:57:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:58.265 06:57:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:58.265 06:57:05 -- common/autotest_common.sh@10 -- # set +x 00:16:58.265 06:57:05 -- nvmf/common.sh@469 -- # nvmfpid=3045843 00:16:58.265 06:57:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:58.265 06:57:05 -- nvmf/common.sh@470 -- # waitforlisten 3045843 00:16:58.265 06:57:05 -- common/autotest_common.sh@819 -- # '[' -z 3045843 ']' 00:16:58.265 06:57:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.265 06:57:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:58.265 06:57:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:58.265 06:57:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:58.265 06:57:05 -- common/autotest_common.sh@10 -- # set +x 00:16:58.265 [2024-05-12 06:57:05.387802] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:58.265 [2024-05-12 06:57:05.387877] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.524 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.524 [2024-05-12 06:57:05.451944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.524 [2024-05-12 06:57:05.560093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:58.524 [2024-05-12 06:57:05.560235] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.524 [2024-05-12 06:57:05.560252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.524 [2024-05-12 06:57:05.560264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:58.524 [2024-05-12 06:57:05.560291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.458 06:57:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:59.458 06:57:06 -- common/autotest_common.sh@852 -- # return 0 00:16:59.458 06:57:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:59.458 06:57:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:59.458 06:57:06 -- common/autotest_common.sh@10 -- # set +x 00:16:59.458 06:57:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.458 06:57:06 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:59.458 06:57:06 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:59.458 06:57:06 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:59.716 [2024-05-12 06:57:06.666031] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.716 06:57:06 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:59.974 06:57:06 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:00.232 [2024-05-12 06:57:07.183391] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:00.232 [2024-05-12 06:57:07.183618] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.232 06:57:07 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:00.490 malloc0 00:17:00.490 06:57:07 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:00.748 06:57:07 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:01.007 06:57:07 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:01.007 06:57:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:01.007 06:57:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:01.007 06:57:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:01.007 06:57:07 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:01.007 06:57:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.007 06:57:07 -- target/tls.sh@28 -- # bdevperf_pid=3046527 00:17:01.007 06:57:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:01.007 06:57:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:01.007 06:57:07 -- target/tls.sh@31 -- # waitforlisten 3046527 /var/tmp/bdevperf.sock 00:17:01.007 06:57:07 -- common/autotest_common.sh@819 -- # '[' -z 3046527 ']' 00:17:01.007 06:57:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.007 06:57:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.007 06:57:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:01.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.007 06:57:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.007 06:57:07 -- common/autotest_common.sh@10 -- # set +x 00:17:01.007 [2024-05-12 06:57:07.988996] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:01.007 [2024-05-12 06:57:07.989093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046527 ] 00:17:01.007 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.007 [2024-05-12 06:57:08.053521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.265 [2024-05-12 06:57:08.162654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.831 06:57:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:01.831 06:57:08 -- common/autotest_common.sh@852 -- # return 0 00:17:01.831 06:57:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:02.089 [2024-05-12 06:57:09.189428] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:02.347 TLSTESTn1 00:17:02.347 06:57:09 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:02.347 Running I/O for 10 seconds... 
00:17:12.310 00:17:12.310 Latency(us) 00:17:12.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.311 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:12.311 Verification LBA range: start 0x0 length 0x2000 00:17:12.311 TLSTESTn1 : 10.03 1937.59 7.57 0.00 0.00 65962.10 7670.14 71458.51 00:17:12.311 =================================================================================================================== 00:17:12.311 Total : 1937.59 7.57 0.00 0.00 65962.10 7670.14 71458.51 00:17:12.311 0 00:17:12.569 06:57:19 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:12.569 06:57:19 -- target/tls.sh@45 -- # killprocess 3046527 00:17:12.569 06:57:19 -- common/autotest_common.sh@926 -- # '[' -z 3046527 ']' 00:17:12.569 06:57:19 -- common/autotest_common.sh@930 -- # kill -0 3046527 00:17:12.569 06:57:19 -- common/autotest_common.sh@931 -- # uname 00:17:12.569 06:57:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.569 06:57:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3046527 00:17:12.569 06:57:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:12.569 06:57:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:12.569 06:57:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3046527' 00:17:12.569 killing process with pid 3046527 00:17:12.569 06:57:19 -- common/autotest_common.sh@945 -- # kill 3046527 00:17:12.569 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.569 00:17:12.569 Latency(us) 00:17:12.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.569 =================================================================================================================== 00:17:12.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.569 06:57:19 -- common/autotest_common.sh@950 -- # wait 3046527 00:17:12.827 06:57:19 -- 
target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:12.827 06:57:19 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:12.827 06:57:19 -- common/autotest_common.sh@640 -- # local es=0 00:17:12.827 06:57:19 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:12.827 06:57:19 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:12.827 06:57:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:12.827 06:57:19 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:12.827 06:57:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:12.827 06:57:19 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:12.827 06:57:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:12.827 06:57:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:12.827 06:57:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:12.827 06:57:19 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:12.827 06:57:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:12.827 06:57:19 -- target/tls.sh@28 -- # bdevperf_pid=3048148 00:17:12.827 06:57:19 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:12.827 06:57:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:12.827 06:57:19 -- target/tls.sh@31 
-- # waitforlisten 3048148 /var/tmp/bdevperf.sock 00:17:12.827 06:57:19 -- common/autotest_common.sh@819 -- # '[' -z 3048148 ']' 00:17:12.827 06:57:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.827 06:57:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:12.827 06:57:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.827 06:57:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:12.827 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:17:12.827 [2024-05-12 06:57:19.779056] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:12.828 [2024-05-12 06:57:19.779140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048148 ] 00:17:12.828 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.828 [2024-05-12 06:57:19.836371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.828 [2024-05-12 06:57:19.937303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.795 06:57:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.795 06:57:20 -- common/autotest_common.sh@852 -- # return 0 00:17:13.795 06:57:20 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:14.053 [2024-05-12 06:57:20.960917] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support 
is considered experimental 00:17:14.053 [2024-05-12 06:57:20.960971] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:14.053 request: 00:17:14.053 { 00:17:14.053 "name": "TLSTEST", 00:17:14.053 "trtype": "tcp", 00:17:14.053 "traddr": "10.0.0.2", 00:17:14.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.053 "adrfam": "ipv4", 00:17:14.053 "trsvcid": "4420", 00:17:14.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.053 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:14.053 "method": "bdev_nvme_attach_controller", 00:17:14.053 "req_id": 1 00:17:14.053 } 00:17:14.053 Got JSON-RPC error response 00:17:14.053 response: 00:17:14.053 { 00:17:14.053 "code": -22, 00:17:14.053 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:14.053 } 00:17:14.053 06:57:20 -- target/tls.sh@36 -- # killprocess 3048148 00:17:14.053 06:57:20 -- common/autotest_common.sh@926 -- # '[' -z 3048148 ']' 00:17:14.053 06:57:20 -- common/autotest_common.sh@930 -- # kill -0 3048148 00:17:14.053 06:57:20 -- common/autotest_common.sh@931 -- # uname 00:17:14.053 06:57:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:14.053 06:57:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3048148 00:17:14.053 06:57:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:14.053 06:57:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:14.053 06:57:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3048148' 00:17:14.053 killing process with pid 3048148 00:17:14.053 06:57:21 -- common/autotest_common.sh@945 -- # kill 3048148 00:17:14.053 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.053 00:17:14.053 Latency(us) 00:17:14.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.053 
=================================================================================================================== 00:17:14.053 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.053 06:57:21 -- common/autotest_common.sh@950 -- # wait 3048148 00:17:14.311 06:57:21 -- target/tls.sh@37 -- # return 1 00:17:14.311 06:57:21 -- common/autotest_common.sh@643 -- # es=1 00:17:14.311 06:57:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:14.311 06:57:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:14.311 06:57:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:14.311 06:57:21 -- target/tls.sh@183 -- # killprocess 3045843 00:17:14.311 06:57:21 -- common/autotest_common.sh@926 -- # '[' -z 3045843 ']' 00:17:14.311 06:57:21 -- common/autotest_common.sh@930 -- # kill -0 3045843 00:17:14.311 06:57:21 -- common/autotest_common.sh@931 -- # uname 00:17:14.311 06:57:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:14.311 06:57:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3045843 00:17:14.311 06:57:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:14.311 06:57:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:14.311 06:57:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3045843' 00:17:14.311 killing process with pid 3045843 00:17:14.311 06:57:21 -- common/autotest_common.sh@945 -- # kill 3045843 00:17:14.311 06:57:21 -- common/autotest_common.sh@950 -- # wait 3045843 00:17:14.570 06:57:21 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:14.570 06:57:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:14.570 06:57:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:14.570 06:57:21 -- common/autotest_common.sh@10 -- # set +x 00:17:14.570 06:57:21 -- nvmf/common.sh@469 -- # nvmfpid=3048364 00:17:14.570 06:57:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.570 06:57:21 -- nvmf/common.sh@470 -- # waitforlisten 3048364 00:17:14.570 06:57:21 -- common/autotest_common.sh@819 -- # '[' -z 3048364 ']' 00:17:14.570 06:57:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.570 06:57:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:14.570 06:57:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.570 06:57:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:14.570 06:57:21 -- common/autotest_common.sh@10 -- # set +x 00:17:14.570 [2024-05-12 06:57:21.640760] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:14.570 [2024-05-12 06:57:21.640843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.570 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.828 [2024-05-12 06:57:21.709577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.828 [2024-05-12 06:57:21.822325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:14.828 [2024-05-12 06:57:21.822505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.828 [2024-05-12 06:57:21.822526] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.828 [2024-05-12 06:57:21.822550] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
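The `Incorrect permissions for PSK file` errors above (`tcp_load_psk`, error code -22 / "Could not retrieve PSK from file") come from SPDK refusing PSK files whose mode is looser than owner-only; later in this log the harness runs `chmod 0600` on `key_long.txt` and the same RPCs succeed. A minimal sketch of that precondition, using a throwaway placeholder file rather than the real test key:

```shell
# Create a throwaway PSK file and tighten its mode the way the test harness
# does before retrying bdev_nvme_attach_controller / nvmf_subsystem_add_host.
# The file name and key contents here are placeholders, not the real test key.
psk=$(mktemp /tmp/key_long.XXXXXX)
echo "placeholder-psk-material" > "$psk"

chmod 0600 "$psk"        # owner read/write only; SPDK rejects looser modes

stat -c '%a' "$psk"      # prints the octal mode: 600
rm -f "$psk"
```

With the mode left at the `mktemp` default this check is moot (`mktemp` already creates 0600 files), but the `chmod` makes the requirement explicit for keys copied from elsewhere, which is the situation the failing RPCs above hit.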
00:17:14.828 [2024-05-12 06:57:21.822581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.761 06:57:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:15.761 06:57:22 -- common/autotest_common.sh@852 -- # return 0 00:17:15.761 06:57:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:15.761 06:57:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:15.761 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:17:15.761 06:57:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.761 06:57:22 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:15.761 06:57:22 -- common/autotest_common.sh@640 -- # local es=0 00:17:15.761 06:57:22 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:15.761 06:57:22 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:15.761 06:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:15.761 06:57:22 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:15.761 06:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:15.761 06:57:22 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:15.761 06:57:22 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:15.761 06:57:22 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.761 [2024-05-12 06:57:22.856803] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.761 06:57:22 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.019 06:57:23 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.277 [2024-05-12 06:57:23.366135] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.277 [2024-05-12 06:57:23.366368] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.277 06:57:23 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:16.535 malloc0 00:17:16.535 06:57:23 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:16.792 06:57:23 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:17.050 [2024-05-12 06:57:24.095778] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:17.050 [2024-05-12 06:57:24.095819] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:17.050 [2024-05-12 06:57:24.095840] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:17.050 request: 00:17:17.050 { 00:17:17.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.050 "host": "nqn.2016-06.io.spdk:host1", 00:17:17.050 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:17.050 "method": "nvmf_subsystem_add_host", 00:17:17.050 "req_id": 1 00:17:17.050 } 00:17:17.050 Got JSON-RPC error response 00:17:17.050 response: 00:17:17.050 { 00:17:17.050 "code": -32603, 00:17:17.050 "message": "Internal error" 
00:17:17.050 } 00:17:17.050 06:57:24 -- common/autotest_common.sh@643 -- # es=1 00:17:17.050 06:57:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:17.050 06:57:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:17.050 06:57:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:17.050 06:57:24 -- target/tls.sh@189 -- # killprocess 3048364 00:17:17.050 06:57:24 -- common/autotest_common.sh@926 -- # '[' -z 3048364 ']' 00:17:17.050 06:57:24 -- common/autotest_common.sh@930 -- # kill -0 3048364 00:17:17.050 06:57:24 -- common/autotest_common.sh@931 -- # uname 00:17:17.050 06:57:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:17.050 06:57:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3048364 00:17:17.050 06:57:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:17.050 06:57:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:17.050 06:57:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3048364' 00:17:17.050 killing process with pid 3048364 00:17:17.050 06:57:24 -- common/autotest_common.sh@945 -- # kill 3048364 00:17:17.050 06:57:24 -- common/autotest_common.sh@950 -- # wait 3048364 00:17:17.308 06:57:24 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:17.308 06:57:24 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:17.308 06:57:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:17.308 06:57:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:17.308 06:57:24 -- common/autotest_common.sh@10 -- # set +x 00:17:17.568 06:57:24 -- nvmf/common.sh@469 -- # nvmfpid=3048742 00:17:17.568 06:57:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:17.568 06:57:24 -- nvmf/common.sh@470 -- # waitforlisten 3048742 00:17:17.568 06:57:24 -- 
common/autotest_common.sh@819 -- # '[' -z 3048742 ']' 00:17:17.568 06:57:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.568 06:57:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.568 06:57:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.568 06:57:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.568 06:57:24 -- common/autotest_common.sh@10 -- # set +x 00:17:17.568 [2024-05-12 06:57:24.480024] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:17.568 [2024-05-12 06:57:24.480126] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.568 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.568 [2024-05-12 06:57:24.547693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.568 [2024-05-12 06:57:24.660035] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:17.568 [2024-05-12 06:57:24.660210] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.568 [2024-05-12 06:57:24.660232] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.568 [2024-05-12 06:57:24.660247] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:17.568 [2024-05-12 06:57:24.660277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.501 06:57:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.501 06:57:25 -- common/autotest_common.sh@852 -- # return 0 00:17:18.501 06:57:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:18.501 06:57:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:18.501 06:57:25 -- common/autotest_common.sh@10 -- # set +x 00:17:18.501 06:57:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.501 06:57:25 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:18.501 06:57:25 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:18.501 06:57:25 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:18.760 [2024-05-12 06:57:25.636167] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.760 06:57:25 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:19.018 06:57:25 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:19.018 [2024-05-12 06:57:26.121474] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:19.018 [2024-05-12 06:57:26.121714] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.018 06:57:26 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:19.276 malloc0 00:17:19.276 06:57:26 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:19.535 06:57:26 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:19.793 06:57:26 -- target/tls.sh@197 -- # bdevperf_pid=3049045 00:17:19.793 06:57:26 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.793 06:57:26 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.793 06:57:26 -- target/tls.sh@200 -- # waitforlisten 3049045 /var/tmp/bdevperf.sock 00:17:19.793 06:57:26 -- common/autotest_common.sh@819 -- # '[' -z 3049045 ']' 00:17:19.793 06:57:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.793 06:57:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:19.793 06:57:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.793 06:57:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:19.793 06:57:26 -- common/autotest_common.sh@10 -- # set +x 00:17:19.793 [2024-05-12 06:57:26.902315] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:17:19.793 [2024-05-12 06:57:26.902402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3049045 ] 00:17:20.051 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.051 [2024-05-12 06:57:26.961382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.051 [2024-05-12 06:57:27.066467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.991 06:57:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:20.991 06:57:27 -- common/autotest_common.sh@852 -- # return 0 00:17:20.991 06:57:27 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:20.991 [2024-05-12 06:57:28.091565] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.249 TLSTESTn1 00:17:21.249 06:57:28 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:21.507 06:57:28 -- target/tls.sh@205 -- # tgtconf='{ 00:17:21.507 "subsystems": [ 00:17:21.507 { 00:17:21.507 "subsystem": "iobuf", 00:17:21.507 "config": [ 00:17:21.507 { 00:17:21.507 "method": "iobuf_set_options", 00:17:21.507 "params": { 00:17:21.507 "small_pool_count": 8192, 00:17:21.507 "large_pool_count": 1024, 00:17:21.507 "small_bufsize": 8192, 00:17:21.507 "large_bufsize": 135168 00:17:21.507 } 00:17:21.507 } 00:17:21.507 ] 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "subsystem": "sock", 00:17:21.507 "config": [ 00:17:21.507 { 00:17:21.507 "method": "sock_impl_set_options", 00:17:21.507 "params": { 00:17:21.507 "impl_name": "posix", 
00:17:21.507 "recv_buf_size": 2097152, 00:17:21.507 "send_buf_size": 2097152, 00:17:21.507 "enable_recv_pipe": true, 00:17:21.507 "enable_quickack": false, 00:17:21.507 "enable_placement_id": 0, 00:17:21.507 "enable_zerocopy_send_server": true, 00:17:21.507 "enable_zerocopy_send_client": false, 00:17:21.507 "zerocopy_threshold": 0, 00:17:21.507 "tls_version": 0, 00:17:21.507 "enable_ktls": false 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "sock_impl_set_options", 00:17:21.507 "params": { 00:17:21.507 "impl_name": "ssl", 00:17:21.507 "recv_buf_size": 4096, 00:17:21.507 "send_buf_size": 4096, 00:17:21.507 "enable_recv_pipe": true, 00:17:21.507 "enable_quickack": false, 00:17:21.507 "enable_placement_id": 0, 00:17:21.507 "enable_zerocopy_send_server": true, 00:17:21.507 "enable_zerocopy_send_client": false, 00:17:21.507 "zerocopy_threshold": 0, 00:17:21.507 "tls_version": 0, 00:17:21.507 "enable_ktls": false 00:17:21.507 } 00:17:21.507 } 00:17:21.507 ] 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "subsystem": "vmd", 00:17:21.507 "config": [] 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "subsystem": "accel", 00:17:21.507 "config": [ 00:17:21.507 { 00:17:21.507 "method": "accel_set_options", 00:17:21.507 "params": { 00:17:21.507 "small_cache_size": 128, 00:17:21.507 "large_cache_size": 16, 00:17:21.507 "task_count": 2048, 00:17:21.507 "sequence_count": 2048, 00:17:21.507 "buf_count": 2048 00:17:21.507 } 00:17:21.507 } 00:17:21.507 ] 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "subsystem": "bdev", 00:17:21.507 "config": [ 00:17:21.507 { 00:17:21.507 "method": "bdev_set_options", 00:17:21.507 "params": { 00:17:21.507 "bdev_io_pool_size": 65535, 00:17:21.507 "bdev_io_cache_size": 256, 00:17:21.507 "bdev_auto_examine": true, 00:17:21.507 "iobuf_small_cache_size": 128, 00:17:21.507 "iobuf_large_cache_size": 16 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "bdev_raid_set_options", 00:17:21.507 "params": { 00:17:21.507 
"process_window_size_kb": 1024 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "bdev_iscsi_set_options", 00:17:21.507 "params": { 00:17:21.507 "timeout_sec": 30 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "bdev_nvme_set_options", 00:17:21.507 "params": { 00:17:21.507 "action_on_timeout": "none", 00:17:21.507 "timeout_us": 0, 00:17:21.507 "timeout_admin_us": 0, 00:17:21.507 "keep_alive_timeout_ms": 10000, 00:17:21.507 "transport_retry_count": 4, 00:17:21.507 "arbitration_burst": 0, 00:17:21.507 "low_priority_weight": 0, 00:17:21.507 "medium_priority_weight": 0, 00:17:21.507 "high_priority_weight": 0, 00:17:21.507 "nvme_adminq_poll_period_us": 10000, 00:17:21.507 "nvme_ioq_poll_period_us": 0, 00:17:21.507 "io_queue_requests": 0, 00:17:21.507 "delay_cmd_submit": true, 00:17:21.507 "bdev_retry_count": 3, 00:17:21.507 "transport_ack_timeout": 0, 00:17:21.507 "ctrlr_loss_timeout_sec": 0, 00:17:21.507 "reconnect_delay_sec": 0, 00:17:21.507 "fast_io_fail_timeout_sec": 0, 00:17:21.507 "generate_uuids": false, 00:17:21.507 "transport_tos": 0, 00:17:21.507 "io_path_stat": false, 00:17:21.507 "allow_accel_sequence": false 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "bdev_nvme_set_hotplug", 00:17:21.507 "params": { 00:17:21.507 "period_us": 100000, 00:17:21.507 "enable": false 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "bdev_malloc_create", 00:17:21.507 "params": { 00:17:21.507 "name": "malloc0", 00:17:21.507 "num_blocks": 8192, 00:17:21.507 "block_size": 4096, 00:17:21.507 "physical_block_size": 4096, 00:17:21.507 "uuid": "9f2f0938-7b10-44e2-a4c2-60b1f732aab1", 00:17:21.507 "optimal_io_boundary": 0 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "bdev_wait_for_examine" 00:17:21.507 } 00:17:21.507 ] 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "subsystem": "nbd", 00:17:21.507 "config": [] 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "subsystem": "scheduler", 
00:17:21.507 "config": [ 00:17:21.507 { 00:17:21.507 "method": "framework_set_scheduler", 00:17:21.507 "params": { 00:17:21.507 "name": "static" 00:17:21.507 } 00:17:21.507 } 00:17:21.507 ] 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "subsystem": "nvmf", 00:17:21.507 "config": [ 00:17:21.507 { 00:17:21.507 "method": "nvmf_set_config", 00:17:21.507 "params": { 00:17:21.507 "discovery_filter": "match_any", 00:17:21.507 "admin_cmd_passthru": { 00:17:21.507 "identify_ctrlr": false 00:17:21.507 } 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "nvmf_set_max_subsystems", 00:17:21.507 "params": { 00:17:21.507 "max_subsystems": 1024 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "nvmf_set_crdt", 00:17:21.507 "params": { 00:17:21.507 "crdt1": 0, 00:17:21.507 "crdt2": 0, 00:17:21.507 "crdt3": 0 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "nvmf_create_transport", 00:17:21.507 "params": { 00:17:21.507 "trtype": "TCP", 00:17:21.507 "max_queue_depth": 128, 00:17:21.507 "max_io_qpairs_per_ctrlr": 127, 00:17:21.507 "in_capsule_data_size": 4096, 00:17:21.507 "max_io_size": 131072, 00:17:21.507 "io_unit_size": 131072, 00:17:21.507 "max_aq_depth": 128, 00:17:21.507 "num_shared_buffers": 511, 00:17:21.507 "buf_cache_size": 4294967295, 00:17:21.507 "dif_insert_or_strip": false, 00:17:21.507 "zcopy": false, 00:17:21.507 "c2h_success": false, 00:17:21.507 "sock_priority": 0, 00:17:21.507 "abort_timeout_sec": 1 00:17:21.507 } 00:17:21.507 }, 00:17:21.507 { 00:17:21.507 "method": "nvmf_create_subsystem", 00:17:21.507 "params": { 00:17:21.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.507 "allow_any_host": false, 00:17:21.507 "serial_number": "SPDK00000000000001", 00:17:21.507 "model_number": "SPDK bdev Controller", 00:17:21.507 "max_namespaces": 10, 00:17:21.507 "min_cntlid": 1, 00:17:21.507 "max_cntlid": 65519, 00:17:21.507 "ana_reporting": false 00:17:21.507 } 00:17:21.507 }, 00:17:21.508 { 00:17:21.508 "method": 
"nvmf_subsystem_add_host", 00:17:21.508 "params": { 00:17:21.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.508 "host": "nqn.2016-06.io.spdk:host1", 00:17:21.508 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:21.508 } 00:17:21.508 }, 00:17:21.508 { 00:17:21.508 "method": "nvmf_subsystem_add_ns", 00:17:21.508 "params": { 00:17:21.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.508 "namespace": { 00:17:21.508 "nsid": 1, 00:17:21.508 "bdev_name": "malloc0", 00:17:21.508 "nguid": "9F2F09387B1044E2A4C260B1F732AAB1", 00:17:21.508 "uuid": "9f2f0938-7b10-44e2-a4c2-60b1f732aab1" 00:17:21.508 } 00:17:21.508 } 00:17:21.508 }, 00:17:21.508 { 00:17:21.508 "method": "nvmf_subsystem_add_listener", 00:17:21.508 "params": { 00:17:21.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.508 "listen_address": { 00:17:21.508 "trtype": "TCP", 00:17:21.508 "adrfam": "IPv4", 00:17:21.508 "traddr": "10.0.0.2", 00:17:21.508 "trsvcid": "4420" 00:17:21.508 }, 00:17:21.508 "secure_channel": true 00:17:21.508 } 00:17:21.508 } 00:17:21.508 ] 00:17:21.508 } 00:17:21.508 ] 00:17:21.508 }' 00:17:21.508 06:57:28 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:21.767 06:57:28 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:21.767 "subsystems": [ 00:17:21.767 { 00:17:21.767 "subsystem": "iobuf", 00:17:21.767 "config": [ 00:17:21.767 { 00:17:21.767 "method": "iobuf_set_options", 00:17:21.767 "params": { 00:17:21.767 "small_pool_count": 8192, 00:17:21.767 "large_pool_count": 1024, 00:17:21.767 "small_bufsize": 8192, 00:17:21.767 "large_bufsize": 135168 00:17:21.767 } 00:17:21.767 } 00:17:21.767 ] 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "subsystem": "sock", 00:17:21.767 "config": [ 00:17:21.767 { 00:17:21.767 "method": "sock_impl_set_options", 00:17:21.767 "params": { 00:17:21.767 "impl_name": "posix", 00:17:21.767 "recv_buf_size": 2097152, 00:17:21.767 
"send_buf_size": 2097152, 00:17:21.767 "enable_recv_pipe": true, 00:17:21.767 "enable_quickack": false, 00:17:21.767 "enable_placement_id": 0, 00:17:21.767 "enable_zerocopy_send_server": true, 00:17:21.767 "enable_zerocopy_send_client": false, 00:17:21.767 "zerocopy_threshold": 0, 00:17:21.767 "tls_version": 0, 00:17:21.767 "enable_ktls": false 00:17:21.767 } 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "method": "sock_impl_set_options", 00:17:21.767 "params": { 00:17:21.767 "impl_name": "ssl", 00:17:21.767 "recv_buf_size": 4096, 00:17:21.767 "send_buf_size": 4096, 00:17:21.767 "enable_recv_pipe": true, 00:17:21.767 "enable_quickack": false, 00:17:21.767 "enable_placement_id": 0, 00:17:21.767 "enable_zerocopy_send_server": true, 00:17:21.767 "enable_zerocopy_send_client": false, 00:17:21.767 "zerocopy_threshold": 0, 00:17:21.767 "tls_version": 0, 00:17:21.767 "enable_ktls": false 00:17:21.767 } 00:17:21.767 } 00:17:21.767 ] 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "subsystem": "vmd", 00:17:21.767 "config": [] 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "subsystem": "accel", 00:17:21.767 "config": [ 00:17:21.767 { 00:17:21.767 "method": "accel_set_options", 00:17:21.767 "params": { 00:17:21.767 "small_cache_size": 128, 00:17:21.767 "large_cache_size": 16, 00:17:21.767 "task_count": 2048, 00:17:21.767 "sequence_count": 2048, 00:17:21.767 "buf_count": 2048 00:17:21.767 } 00:17:21.767 } 00:17:21.767 ] 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "subsystem": "bdev", 00:17:21.767 "config": [ 00:17:21.767 { 00:17:21.767 "method": "bdev_set_options", 00:17:21.767 "params": { 00:17:21.767 "bdev_io_pool_size": 65535, 00:17:21.767 "bdev_io_cache_size": 256, 00:17:21.767 "bdev_auto_examine": true, 00:17:21.767 "iobuf_small_cache_size": 128, 00:17:21.767 "iobuf_large_cache_size": 16 00:17:21.767 } 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "method": "bdev_raid_set_options", 00:17:21.767 "params": { 00:17:21.767 "process_window_size_kb": 1024 00:17:21.767 } 00:17:21.767 }, 
00:17:21.767 { 00:17:21.767 "method": "bdev_iscsi_set_options", 00:17:21.767 "params": { 00:17:21.767 "timeout_sec": 30 00:17:21.767 } 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "method": "bdev_nvme_set_options", 00:17:21.767 "params": { 00:17:21.767 "action_on_timeout": "none", 00:17:21.767 "timeout_us": 0, 00:17:21.767 "timeout_admin_us": 0, 00:17:21.767 "keep_alive_timeout_ms": 10000, 00:17:21.767 "transport_retry_count": 4, 00:17:21.767 "arbitration_burst": 0, 00:17:21.767 "low_priority_weight": 0, 00:17:21.767 "medium_priority_weight": 0, 00:17:21.767 "high_priority_weight": 0, 00:17:21.767 "nvme_adminq_poll_period_us": 10000, 00:17:21.767 "nvme_ioq_poll_period_us": 0, 00:17:21.767 "io_queue_requests": 512, 00:17:21.767 "delay_cmd_submit": true, 00:17:21.767 "bdev_retry_count": 3, 00:17:21.767 "transport_ack_timeout": 0, 00:17:21.767 "ctrlr_loss_timeout_sec": 0, 00:17:21.767 "reconnect_delay_sec": 0, 00:17:21.767 "fast_io_fail_timeout_sec": 0, 00:17:21.767 "generate_uuids": false, 00:17:21.767 "transport_tos": 0, 00:17:21.767 "io_path_stat": false, 00:17:21.767 "allow_accel_sequence": false 00:17:21.767 } 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "method": "bdev_nvme_attach_controller", 00:17:21.767 "params": { 00:17:21.767 "name": "TLSTEST", 00:17:21.767 "trtype": "TCP", 00:17:21.767 "adrfam": "IPv4", 00:17:21.767 "traddr": "10.0.0.2", 00:17:21.767 "trsvcid": "4420", 00:17:21.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.767 "prchk_reftag": false, 00:17:21.767 "prchk_guard": false, 00:17:21.767 "ctrlr_loss_timeout_sec": 0, 00:17:21.767 "reconnect_delay_sec": 0, 00:17:21.767 "fast_io_fail_timeout_sec": 0, 00:17:21.767 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:21.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.767 "hdgst": false, 00:17:21.767 "ddgst": false 00:17:21.767 } 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "method": "bdev_nvme_set_hotplug", 00:17:21.767 "params": { 00:17:21.767 
"period_us": 100000, 00:17:21.767 "enable": false 00:17:21.767 } 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "method": "bdev_wait_for_examine" 00:17:21.767 } 00:17:21.767 ] 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "subsystem": "nbd", 00:17:21.767 "config": [] 00:17:21.767 } 00:17:21.767 ] 00:17:21.767 }' 00:17:21.767 06:57:28 -- target/tls.sh@208 -- # killprocess 3049045 00:17:21.767 06:57:28 -- common/autotest_common.sh@926 -- # '[' -z 3049045 ']' 00:17:21.767 06:57:28 -- common/autotest_common.sh@930 -- # kill -0 3049045 00:17:21.767 06:57:28 -- common/autotest_common.sh@931 -- # uname 00:17:21.767 06:57:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.767 06:57:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3049045 00:17:21.767 06:57:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:21.768 06:57:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:21.768 06:57:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3049045' 00:17:21.768 killing process with pid 3049045 00:17:21.768 06:57:28 -- common/autotest_common.sh@945 -- # kill 3049045 00:17:21.768 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.768 00:17:21.768 Latency(us) 00:17:21.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.768 =================================================================================================================== 00:17:21.768 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.768 06:57:28 -- common/autotest_common.sh@950 -- # wait 3049045 00:17:22.026 06:57:29 -- target/tls.sh@209 -- # killprocess 3048742 00:17:22.026 06:57:29 -- common/autotest_common.sh@926 -- # '[' -z 3048742 ']' 00:17:22.026 06:57:29 -- common/autotest_common.sh@930 -- # kill -0 3048742 00:17:22.026 06:57:29 -- common/autotest_common.sh@931 -- # uname 00:17:22.285 06:57:29 -- common/autotest_common.sh@931 -- # '[' Linux = 
Linux ']' 00:17:22.285 06:57:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3048742 00:17:22.285 06:57:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:22.285 06:57:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:22.285 06:57:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3048742' 00:17:22.285 killing process with pid 3048742 00:17:22.285 06:57:29 -- common/autotest_common.sh@945 -- # kill 3048742 00:17:22.285 06:57:29 -- common/autotest_common.sh@950 -- # wait 3048742 00:17:22.544 06:57:29 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:22.544 06:57:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.544 06:57:29 -- target/tls.sh@212 -- # echo '{ 00:17:22.544 "subsystems": [ 00:17:22.544 { 00:17:22.544 "subsystem": "iobuf", 00:17:22.544 "config": [ 00:17:22.544 { 00:17:22.544 "method": "iobuf_set_options", 00:17:22.544 "params": { 00:17:22.544 "small_pool_count": 8192, 00:17:22.544 "large_pool_count": 1024, 00:17:22.544 "small_bufsize": 8192, 00:17:22.544 "large_bufsize": 135168 00:17:22.544 } 00:17:22.544 } 00:17:22.544 ] 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "subsystem": "sock", 00:17:22.544 "config": [ 00:17:22.544 { 00:17:22.544 "method": "sock_impl_set_options", 00:17:22.544 "params": { 00:17:22.544 "impl_name": "posix", 00:17:22.544 "recv_buf_size": 2097152, 00:17:22.544 "send_buf_size": 2097152, 00:17:22.544 "enable_recv_pipe": true, 00:17:22.544 "enable_quickack": false, 00:17:22.544 "enable_placement_id": 0, 00:17:22.544 "enable_zerocopy_send_server": true, 00:17:22.544 "enable_zerocopy_send_client": false, 00:17:22.544 "zerocopy_threshold": 0, 00:17:22.544 "tls_version": 0, 00:17:22.544 "enable_ktls": false 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "sock_impl_set_options", 00:17:22.544 "params": { 00:17:22.544 "impl_name": "ssl", 00:17:22.544 "recv_buf_size": 4096, 00:17:22.544 "send_buf_size": 4096, 
00:17:22.544 "enable_recv_pipe": true, 00:17:22.544 "enable_quickack": false, 00:17:22.544 "enable_placement_id": 0, 00:17:22.544 "enable_zerocopy_send_server": true, 00:17:22.544 "enable_zerocopy_send_client": false, 00:17:22.544 "zerocopy_threshold": 0, 00:17:22.544 "tls_version": 0, 00:17:22.544 "enable_ktls": false 00:17:22.544 } 00:17:22.544 } 00:17:22.544 ] 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "subsystem": "vmd", 00:17:22.544 "config": [] 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "subsystem": "accel", 00:17:22.544 "config": [ 00:17:22.544 { 00:17:22.544 "method": "accel_set_options", 00:17:22.544 "params": { 00:17:22.544 "small_cache_size": 128, 00:17:22.544 "large_cache_size": 16, 00:17:22.544 "task_count": 2048, 00:17:22.544 "sequence_count": 2048, 00:17:22.544 "buf_count": 2048 00:17:22.544 } 00:17:22.544 } 00:17:22.544 ] 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "subsystem": "bdev", 00:17:22.544 "config": [ 00:17:22.544 { 00:17:22.544 "method": "bdev_set_options", 00:17:22.544 "params": { 00:17:22.544 "bdev_io_pool_size": 65535, 00:17:22.544 "bdev_io_cache_size": 256, 00:17:22.544 "bdev_auto_examine": true, 00:17:22.544 "iobuf_small_cache_size": 128, 00:17:22.544 "iobuf_large_cache_size": 16 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "bdev_raid_set_options", 00:17:22.544 "params": { 00:17:22.544 "process_window_size_kb": 1024 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "bdev_iscsi_set_options", 00:17:22.544 "params": { 00:17:22.544 "timeout_sec": 30 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "bdev_nvme_set_options", 00:17:22.544 "params": { 00:17:22.544 "action_on_timeout": "none", 00:17:22.544 "timeout_us": 0, 00:17:22.544 "timeout_admin_us": 0, 00:17:22.544 "keep_alive_timeout_ms": 10000, 00:17:22.544 "transport_retry_count": 4, 00:17:22.544 "arbitration_burst": 0, 00:17:22.544 "low_priority_weight": 0, 00:17:22.544 "medium_priority_weight": 0, 00:17:22.544 
"high_priority_weight": 0, 00:17:22.544 "nvme_adminq_poll_period_us": 10000, 00:17:22.544 "nvme_ioq_poll_period_us": 0, 00:17:22.544 "io_queue_requests": 0, 00:17:22.544 "delay_cmd_submit": true, 00:17:22.544 "bdev_retry_count": 3, 00:17:22.544 "transport_ack_timeout": 0, 00:17:22.544 "ctrlr_loss_timeout_sec": 0, 00:17:22.544 "reconnect_delay_sec": 0, 00:17:22.544 "fast_io_fail_timeout_sec": 0, 00:17:22.544 "generate_uuids": false, 00:17:22.544 "transport_tos": 0, 00:17:22.544 "io_path_stat": false, 00:17:22.544 "allow_accel_sequence": false 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "bdev_nvme_set_hotplug", 00:17:22.544 "params": { 00:17:22.544 "period_us": 100000, 00:17:22.544 "enable": false 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "bdev_malloc_create", 00:17:22.544 "params": { 00:17:22.544 "name": "malloc0", 00:17:22.544 "num_blocks": 8192, 00:17:22.544 "block_size": 4096, 00:17:22.544 "physical_block_size": 4096, 00:17:22.544 "uuid": "9f2f0938-7b10-44e2-a4c2-60b1f732aab1", 00:17:22.544 "optimal_io_boundary": 0 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "bdev_wait_for_examine" 00:17:22.544 } 00:17:22.544 ] 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "subsystem": "nbd", 00:17:22.544 "config": [] 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "subsystem": "scheduler", 00:17:22.544 "config": [ 00:17:22.544 { 00:17:22.544 "method": "framework_set_scheduler", 00:17:22.544 "params": { 00:17:22.544 "name": "static" 00:17:22.544 } 00:17:22.544 } 00:17:22.544 ] 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "subsystem": "nvmf", 00:17:22.544 "config": [ 00:17:22.544 { 00:17:22.544 "method": "nvmf_set_config", 00:17:22.544 "params": { 00:17:22.544 "discovery_filter": "match_any", 00:17:22.544 "admin_cmd_passthru": { 00:17:22.544 "identify_ctrlr": false 00:17:22.544 } 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "nvmf_set_max_subsystems", 00:17:22.544 "params": { 00:17:22.544 
"max_subsystems": 1024 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "nvmf_set_crdt", 00:17:22.544 "params": { 00:17:22.544 "crdt1": 0, 00:17:22.544 "crdt2": 0, 00:17:22.544 "crdt3": 0 00:17:22.544 } 00:17:22.544 }, 00:17:22.544 { 00:17:22.544 "method": "nvmf_create_transport", 00:17:22.544 "params": { 00:17:22.544 "trtype": "TCP", 00:17:22.544 "max_queue_depth": 128, 00:17:22.544 "max_io_qpairs_per_ctrlr": 127, 00:17:22.544 "in_capsule_data_size": 4096, 00:17:22.544 "max_io_size": 131072, 00:17:22.544 "io_unit_size": 131072, 00:17:22.544 "max_aq_depth": 128, 00:17:22.545 "num_shared_buffers": 511, 00:17:22.545 "buf_cache_size": 4294967295, 00:17:22.545 "dif_insert_or_strip": false, 00:17:22.545 "zcopy": false, 00:17:22.545 "c2h_success": false, 00:17:22.545 "sock_priority": 0, 00:17:22.545 "abort_timeout_sec": 1 00:17:22.545 } 00:17:22.545 }, 00:17:22.545 { 00:17:22.545 "method": "nvmf_create_subsystem", 00:17:22.545 "params": { 00:17:22.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.545 "allow_any_host": false, 00:17:22.545 "serial_number": "SPDK00000000000001", 00:17:22.545 "model_number": "SPDK bdev Controller", 00:17:22.545 "max_namespaces": 10, 00:17:22.545 "min_cntlid": 1, 00:17:22.545 "max_cntlid": 65519, 00:17:22.545 "ana_reporting": false 00:17:22.545 } 00:17:22.545 }, 00:17:22.545 { 00:17:22.545 "method": "nvmf_subsystem_add_host", 00:17:22.545 "params": { 00:17:22.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.545 "host": "nqn.2016-06.io.spdk:host1", 00:17:22.545 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:22.545 } 00:17:22.545 }, 00:17:22.545 { 00:17:22.545 "method": "nvmf_subsystem_add_ns", 00:17:22.545 "params": { 00:17:22.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.545 "namespace": { 00:17:22.545 "nsid": 1, 00:17:22.545 "bdev_name": "malloc0", 00:17:22.545 "nguid": "9F2F09387B1044E2A4C260B1F732AAB1", 00:17:22.545 "uuid": "9f2f0938-7b10-44e2-a4c2-60b1f732aab1" 
00:17:22.545 } 00:17:22.545 } 00:17:22.545 }, 00:17:22.545 { 00:17:22.545 "method": "nvmf_subsystem_add_listener", 00:17:22.545 "params": { 00:17:22.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.545 "listen_address": { 00:17:22.545 "trtype": "TCP", 00:17:22.545 "adrfam": "IPv4", 00:17:22.545 "traddr": "10.0.0.2", 00:17:22.545 "trsvcid": "4420" 00:17:22.545 }, 00:17:22.545 "secure_channel": true 00:17:22.545 } 00:17:22.545 } 00:17:22.545 ] 00:17:22.545 } 00:17:22.545 ] 00:17:22.545 }' 00:17:22.545 06:57:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:22.545 06:57:29 -- common/autotest_common.sh@10 -- # set +x 00:17:22.545 06:57:29 -- nvmf/common.sh@469 -- # nvmfpid=3049379 00:17:22.545 06:57:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:22.545 06:57:29 -- nvmf/common.sh@470 -- # waitforlisten 3049379 00:17:22.545 06:57:29 -- common/autotest_common.sh@819 -- # '[' -z 3049379 ']' 00:17:22.545 06:57:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.545 06:57:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.545 06:57:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.545 06:57:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.545 06:57:29 -- common/autotest_common.sh@10 -- # set +x 00:17:22.545 [2024-05-12 06:57:29.528845] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:17:22.545 [2024-05-12 06:57:29.528918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.545 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.545 [2024-05-12 06:57:29.592166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.806 [2024-05-12 06:57:29.700578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.806 [2024-05-12 06:57:29.700734] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.806 [2024-05-12 06:57:29.700767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.806 [2024-05-12 06:57:29.700780] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.806 [2024-05-12 06:57:29.700805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.806 [2024-05-12 06:57:29.924148] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.065 [2024-05-12 06:57:29.956147] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.065 [2024-05-12 06:57:29.956360] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.633 06:57:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:23.634 06:57:30 -- common/autotest_common.sh@852 -- # return 0 00:17:23.634 06:57:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.634 06:57:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:23.634 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:17:23.634 06:57:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.634 06:57:30 -- target/tls.sh@216 -- # bdevperf_pid=3049492 
00:17:23.634 06:57:30 -- target/tls.sh@217 -- # waitforlisten 3049492 /var/tmp/bdevperf.sock 00:17:23.634 06:57:30 -- common/autotest_common.sh@819 -- # '[' -z 3049492 ']' 00:17:23.634 06:57:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.634 06:57:30 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:23.634 06:57:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:23.634 06:57:30 -- target/tls.sh@213 -- # echo '{ 00:17:23.634 "subsystems": [ 00:17:23.634 { 00:17:23.634 "subsystem": "iobuf", 00:17:23.634 "config": [ 00:17:23.634 { 00:17:23.634 "method": "iobuf_set_options", 00:17:23.634 "params": { 00:17:23.634 "small_pool_count": 8192, 00:17:23.634 "large_pool_count": 1024, 00:17:23.634 "small_bufsize": 8192, 00:17:23.634 "large_bufsize": 135168 00:17:23.634 } 00:17:23.634 } 00:17:23.634 ] 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "subsystem": "sock", 00:17:23.634 "config": [ 00:17:23.634 { 00:17:23.634 "method": "sock_impl_set_options", 00:17:23.634 "params": { 00:17:23.634 "impl_name": "posix", 00:17:23.634 "recv_buf_size": 2097152, 00:17:23.634 "send_buf_size": 2097152, 00:17:23.634 "enable_recv_pipe": true, 00:17:23.634 "enable_quickack": false, 00:17:23.634 "enable_placement_id": 0, 00:17:23.634 "enable_zerocopy_send_server": true, 00:17:23.634 "enable_zerocopy_send_client": false, 00:17:23.634 "zerocopy_threshold": 0, 00:17:23.634 "tls_version": 0, 00:17:23.634 "enable_ktls": false 00:17:23.634 } 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "method": "sock_impl_set_options", 00:17:23.634 "params": { 00:17:23.634 "impl_name": "ssl", 00:17:23.634 "recv_buf_size": 4096, 00:17:23.634 "send_buf_size": 4096, 00:17:23.634 "enable_recv_pipe": true, 00:17:23.634 "enable_quickack": false, 00:17:23.634 "enable_placement_id": 0, 00:17:23.634 "enable_zerocopy_send_server": true, 
00:17:23.634 "enable_zerocopy_send_client": false, 00:17:23.634 "zerocopy_threshold": 0, 00:17:23.634 "tls_version": 0, 00:17:23.634 "enable_ktls": false 00:17:23.634 } 00:17:23.634 } 00:17:23.634 ] 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "subsystem": "vmd", 00:17:23.634 "config": [] 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "subsystem": "accel", 00:17:23.634 "config": [ 00:17:23.634 { 00:17:23.634 "method": "accel_set_options", 00:17:23.634 "params": { 00:17:23.634 "small_cache_size": 128, 00:17:23.634 "large_cache_size": 16, 00:17:23.634 "task_count": 2048, 00:17:23.634 "sequence_count": 2048, 00:17:23.634 "buf_count": 2048 00:17:23.634 } 00:17:23.634 } 00:17:23.634 ] 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "subsystem": "bdev", 00:17:23.634 "config": [ 00:17:23.634 { 00:17:23.634 "method": "bdev_set_options", 00:17:23.634 "params": { 00:17:23.634 "bdev_io_pool_size": 65535, 00:17:23.634 "bdev_io_cache_size": 256, 00:17:23.634 "bdev_auto_examine": true, 00:17:23.634 "iobuf_small_cache_size": 128, 00:17:23.634 "iobuf_large_cache_size": 16 00:17:23.634 } 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "method": "bdev_raid_set_options", 00:17:23.634 "params": { 00:17:23.634 "process_window_size_kb": 1024 00:17:23.634 } 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "method": "bdev_iscsi_set_options", 00:17:23.634 "params": { 00:17:23.634 "timeout_sec": 30 00:17:23.634 } 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "method": "bdev_nvme_set_options", 00:17:23.634 "params": { 00:17:23.634 "action_on_timeout": "none", 00:17:23.634 "timeout_us": 0, 00:17:23.634 "timeout_admin_us": 0, 00:17:23.634 "keep_alive_timeout_ms": 10000, 00:17:23.634 "transport_retry_count": 4, 00:17:23.634 "arbitration_burst": 0, 00:17:23.634 "low_priority_weight": 0, 00:17:23.634 "medium_priority_weight": 0, 00:17:23.634 "high_priority_weight": 0, 00:17:23.634 "nvme_adminq_poll_period_us": 10000, 00:17:23.634 "nvme_ioq_poll_period_us": 0, 00:17:23.634 "io_queue_requests": 512, 00:17:23.634 
"delay_cmd_submit": true, 00:17:23.634 "bdev_retry_count": 3, 00:17:23.634 "transport_ack_timeout": 0, 00:17:23.634 "ctrlr_loss_timeout_sec": 0, 00:17:23.634 "reconnect_delay_sec": 0, 00:17:23.634 "fast_io_fail_timeout_sec": 0, 00:17:23.634 "generate_uuids": false, 00:17:23.634 "transport_tos": 0, 00:17:23.634 "io_path_stat": false, 00:17:23.634 "allow_accel_sequence": false 00:17:23.634 } 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "method": "bdev_nvme_attach_controller", 00:17:23.634 "params": { 00:17:23.634 "name": "TLSTEST", 00:17:23.634 "trtype": "TCP", 00:17:23.634 "adrfam": "IPv4", 00:17:23.634 "traddr": "10.0.0.2", 00:17:23.634 "trsvcid": "4420", 00:17:23.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.634 "prchk_reftag": false, 00:17:23.634 "prchk_guard": false, 00:17:23.634 "ctrlr_loss_timeout_sec": 0, 00:17:23.634 "reconnect_delay_sec": 0, 00:17:23.634 "fast_io_fail_timeout_sec": 0, 00:17:23.634 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:23.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:23.634 "hdgst": 06:57:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.634 false, 00:17:23.634 "ddgst": false 00:17:23.634 } 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "method": "bdev_nvme_set_hotplug", 00:17:23.634 "params": { 00:17:23.634 "period_us": 100000, 00:17:23.634 "enable": false 00:17:23.634 } 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "method": "bdev_wait_for_examine" 00:17:23.634 } 00:17:23.634 ] 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "subsystem": "nbd", 00:17:23.634 "config": [] 00:17:23.634 } 00:17:23.634 ] 00:17:23.634 }' 00:17:23.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:23.634 06:57:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:23.634 06:57:30 -- common/autotest_common.sh@10 -- # set +x 00:17:23.634 [2024-05-12 06:57:30.547117] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:23.634 [2024-05-12 06:57:30.547197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3049492 ] 00:17:23.634 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.634 [2024-05-12 06:57:30.609122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.634 [2024-05-12 06:57:30.716284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.892 [2024-05-12 06:57:30.878530] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.458 06:57:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:24.458 06:57:31 -- common/autotest_common.sh@852 -- # return 0 00:17:24.458 06:57:31 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:24.458 Running I/O for 10 seconds... 
00:17:36.703 00:17:36.703 Latency(us) 00:17:36.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.703 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:36.703 Verification LBA range: start 0x0 length 0x2000 00:17:36.703 TLSTESTn1 : 10.05 1249.01 4.88 0.00 0.00 102252.18 16214.09 110294.66 00:17:36.703 =================================================================================================================== 00:17:36.703 Total : 1249.01 4.88 0.00 0.00 102252.18 16214.09 110294.66 00:17:36.703 0 00:17:36.703 06:57:41 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.703 06:57:41 -- target/tls.sh@223 -- # killprocess 3049492 00:17:36.703 06:57:41 -- common/autotest_common.sh@926 -- # '[' -z 3049492 ']' 00:17:36.703 06:57:41 -- common/autotest_common.sh@930 -- # kill -0 3049492 00:17:36.703 06:57:41 -- common/autotest_common.sh@931 -- # uname 00:17:36.703 06:57:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.703 06:57:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3049492 00:17:36.703 06:57:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:36.703 06:57:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:36.703 06:57:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3049492' 00:17:36.703 killing process with pid 3049492 00:17:36.703 06:57:41 -- common/autotest_common.sh@945 -- # kill 3049492 00:17:36.703 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.703 00:17:36.703 Latency(us) 00:17:36.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.703 =================================================================================================================== 00:17:36.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.703 06:57:41 -- common/autotest_common.sh@950 -- # wait 3049492 00:17:36.703 06:57:41 
-- target/tls.sh@224 -- # killprocess 3049379 00:17:36.703 06:57:41 -- common/autotest_common.sh@926 -- # '[' -z 3049379 ']' 00:17:36.703 06:57:41 -- common/autotest_common.sh@930 -- # kill -0 3049379 00:17:36.703 06:57:41 -- common/autotest_common.sh@931 -- # uname 00:17:36.703 06:57:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.703 06:57:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3049379 00:17:36.703 06:57:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:36.703 06:57:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:36.703 06:57:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3049379' 00:17:36.703 killing process with pid 3049379 00:17:36.703 06:57:41 -- common/autotest_common.sh@945 -- # kill 3049379 00:17:36.703 06:57:41 -- common/autotest_common.sh@950 -- # wait 3049379 00:17:36.703 06:57:42 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:36.703 06:57:42 -- target/tls.sh@227 -- # cleanup 00:17:36.703 06:57:42 -- target/tls.sh@15 -- # process_shm --id 0 00:17:36.703 06:57:42 -- common/autotest_common.sh@796 -- # type=--id 00:17:36.703 06:57:42 -- common/autotest_common.sh@797 -- # id=0 00:17:36.703 06:57:42 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:36.703 06:57:42 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:36.703 06:57:42 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:36.703 06:57:42 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:36.703 06:57:42 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:36.703 06:57:42 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:36.703 nvmf_trace.0 00:17:36.703 06:57:42 -- common/autotest_common.sh@811 -- # return 0 00:17:36.703 06:57:42 -- target/tls.sh@16 -- # killprocess 
3049492 00:17:36.703 06:57:42 -- common/autotest_common.sh@926 -- # '[' -z 3049492 ']' 00:17:36.703 06:57:42 -- common/autotest_common.sh@930 -- # kill -0 3049492 00:17:36.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3049492) - No such process 00:17:36.703 06:57:42 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3049492 is not found' 00:17:36.703 Process with pid 3049492 is not found 00:17:36.703 06:57:42 -- target/tls.sh@17 -- # nvmftestfini 00:17:36.703 06:57:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:36.703 06:57:42 -- nvmf/common.sh@116 -- # sync 00:17:36.703 06:57:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:36.703 06:57:42 -- nvmf/common.sh@119 -- # set +e 00:17:36.703 06:57:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:36.703 06:57:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:36.703 rmmod nvme_tcp 00:17:36.703 rmmod nvme_fabrics 00:17:36.703 rmmod nvme_keyring 00:17:36.703 06:57:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:36.703 06:57:42 -- nvmf/common.sh@123 -- # set -e 00:17:36.703 06:57:42 -- nvmf/common.sh@124 -- # return 0 00:17:36.703 06:57:42 -- nvmf/common.sh@477 -- # '[' -n 3049379 ']' 00:17:36.704 06:57:42 -- nvmf/common.sh@478 -- # killprocess 3049379 00:17:36.704 06:57:42 -- common/autotest_common.sh@926 -- # '[' -z 3049379 ']' 00:17:36.704 06:57:42 -- common/autotest_common.sh@930 -- # kill -0 3049379 00:17:36.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3049379) - No such process 00:17:36.704 06:57:42 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3049379 is not found' 00:17:36.704 Process with pid 3049379 is not found 00:17:36.704 06:57:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:36.704 06:57:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:36.704 06:57:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:36.704 06:57:42 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:36.704 06:57:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:36.704 06:57:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.704 06:57:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.704 06:57:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.640 06:57:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:37.640 06:57:44 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:37.640 00:17:37.640 real 1m14.799s 00:17:37.640 user 1m52.217s 00:17:37.640 sys 0m24.511s 00:17:37.640 06:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.640 06:57:44 -- common/autotest_common.sh@10 -- # set +x 00:17:37.640 ************************************ 00:17:37.640 END TEST nvmf_tls 00:17:37.640 ************************************ 00:17:37.640 06:57:44 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:37.640 06:57:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:37.640 06:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:37.640 06:57:44 -- common/autotest_common.sh@10 -- # set +x 00:17:37.640 ************************************ 00:17:37.640 START TEST nvmf_fips 00:17:37.640 ************************************ 00:17:37.640 06:57:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:37.640 * Looking for test storage... 
00:17:37.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:37.640 06:57:44 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.640 06:57:44 -- nvmf/common.sh@7 -- # uname -s 00:17:37.640 06:57:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.640 06:57:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.640 06:57:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.640 06:57:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.640 06:57:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.640 06:57:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.640 06:57:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.640 06:57:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.640 06:57:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.640 06:57:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.640 06:57:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:37.640 06:57:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:37.640 06:57:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.640 06:57:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.640 06:57:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.640 06:57:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.640 06:57:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.640 06:57:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.640 06:57:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.640 06:57:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.640 06:57:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.640 06:57:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.640 06:57:44 -- paths/export.sh@5 -- # export PATH 00:17:37.640 06:57:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.640 06:57:44 -- nvmf/common.sh@46 -- # : 0 00:17:37.640 06:57:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:37.640 06:57:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:37.640 06:57:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:37.640 06:57:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.640 06:57:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.640 06:57:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:37.640 06:57:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:37.640 06:57:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:37.640 06:57:44 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.640 06:57:44 -- fips/fips.sh@89 -- # check_openssl_version 00:17:37.640 06:57:44 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:37.640 06:57:44 -- fips/fips.sh@85 -- # openssl version 00:17:37.640 06:57:44 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:37.640 06:57:44 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:37.640 06:57:44 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:37.640 06:57:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:37.640 06:57:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:37.640 06:57:44 -- scripts/common.sh@335 -- # IFS=.-: 00:17:37.640 06:57:44 -- scripts/common.sh@335 -- # read -ra ver1 00:17:37.640 06:57:44 -- scripts/common.sh@336 -- # IFS=.-: 
00:17:37.640 06:57:44 -- scripts/common.sh@336 -- # read -ra ver2 00:17:37.640 06:57:44 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:37.640 06:57:44 -- scripts/common.sh@339 -- # ver1_l=3 00:17:37.640 06:57:44 -- scripts/common.sh@340 -- # ver2_l=3 00:17:37.640 06:57:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:37.640 06:57:44 -- scripts/common.sh@343 -- # case "$op" in 00:17:37.640 06:57:44 -- scripts/common.sh@347 -- # : 1 00:17:37.640 06:57:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:37.640 06:57:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:37.640 06:57:44 -- scripts/common.sh@364 -- # decimal 3 00:17:37.640 06:57:44 -- scripts/common.sh@352 -- # local d=3 00:17:37.640 06:57:44 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:37.641 06:57:44 -- scripts/common.sh@354 -- # echo 3 00:17:37.641 06:57:44 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:37.641 06:57:44 -- scripts/common.sh@365 -- # decimal 3 00:17:37.641 06:57:44 -- scripts/common.sh@352 -- # local d=3 00:17:37.641 06:57:44 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:37.641 06:57:44 -- scripts/common.sh@354 -- # echo 3 00:17:37.641 06:57:44 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:37.641 06:57:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:37.641 06:57:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:37.641 06:57:44 -- scripts/common.sh@363 -- # (( v++ )) 00:17:37.641 06:57:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.641 06:57:44 -- scripts/common.sh@364 -- # decimal 0 00:17:37.641 06:57:44 -- scripts/common.sh@352 -- # local d=0 00:17:37.641 06:57:44 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:37.641 06:57:44 -- scripts/common.sh@354 -- # echo 0 00:17:37.641 06:57:44 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:37.641 06:57:44 -- scripts/common.sh@365 -- # decimal 0 00:17:37.641 06:57:44 -- scripts/common.sh@352 -- # local d=0 00:17:37.641 06:57:44 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:37.641 06:57:44 -- scripts/common.sh@354 -- # echo 0 00:17:37.641 06:57:44 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:37.641 06:57:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:37.641 06:57:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:37.641 06:57:44 -- scripts/common.sh@363 -- # (( v++ )) 00:17:37.641 06:57:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:37.641 06:57:44 -- scripts/common.sh@364 -- # decimal 9 00:17:37.641 06:57:44 -- scripts/common.sh@352 -- # local d=9 00:17:37.641 06:57:44 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:37.641 06:57:44 -- scripts/common.sh@354 -- # echo 9 00:17:37.641 06:57:44 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:37.641 06:57:44 -- scripts/common.sh@365 -- # decimal 0 00:17:37.641 06:57:44 -- scripts/common.sh@352 -- # local d=0 00:17:37.641 06:57:44 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:37.641 06:57:44 -- scripts/common.sh@354 -- # echo 0 00:17:37.641 06:57:44 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:37.641 06:57:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:37.641 06:57:44 -- scripts/common.sh@366 -- # return 0 00:17:37.641 06:57:44 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:37.641 06:57:44 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:37.641 06:57:44 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:37.641 06:57:44 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:37.641 06:57:44 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:37.641 06:57:44 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:37.641 06:57:44 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:37.641 06:57:44 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:37.641 06:57:44 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:37.641 06:57:44 -- fips/fips.sh@114 -- # build_openssl_config 00:17:37.641 06:57:44 -- fips/fips.sh@37 -- # cat 00:17:37.641 06:57:44 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:37.641 06:57:44 -- fips/fips.sh@58 -- # cat - 00:17:37.641 06:57:44 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:37.641 06:57:44 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:37.641 06:57:44 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:37.641 06:57:44 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:37.641 06:57:44 -- fips/fips.sh@117 -- # openssl list -providers 00:17:37.641 06:57:44 -- fips/fips.sh@117 -- # grep name 00:17:37.641 06:57:44 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:37.641 06:57:44 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:37.641 06:57:44 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:37.641 06:57:44 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:37.641 06:57:44 -- fips/fips.sh@128 -- # : 00:17:37.641 06:57:44 -- common/autotest_common.sh@640 -- # local es=0 00:17:37.641 06:57:44 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:37.641 06:57:44 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:37.641 06:57:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:37.641 06:57:44 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:37.641 06:57:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:37.641 06:57:44 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:37.641 06:57:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:37.641 06:57:44 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:37.641 06:57:44 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:37.641 06:57:44 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:37.641 Error setting digest 00:17:37.641 00A235FC597F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, 
Algorithm (MD5 : 97), Properties () 00:17:37.641 00A235FC597F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:37.641 06:57:44 -- common/autotest_common.sh@643 -- # es=1 00:17:37.641 06:57:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:37.641 06:57:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:37.641 06:57:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:37.641 06:57:44 -- fips/fips.sh@131 -- # nvmftestinit 00:17:37.641 06:57:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:37.641 06:57:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.641 06:57:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:37.641 06:57:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:37.641 06:57:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:37.641 06:57:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.641 06:57:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.641 06:57:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.641 06:57:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:37.641 06:57:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:37.641 06:57:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:37.641 06:57:44 -- common/autotest_common.sh@10 -- # set +x 00:17:39.542 06:57:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:39.542 06:57:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:39.542 06:57:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:39.542 06:57:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:39.542 06:57:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:39.542 06:57:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:39.542 06:57:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:39.542 06:57:46 -- nvmf/common.sh@294 -- # net_devs=() 00:17:39.542 06:57:46 -- nvmf/common.sh@294 -- 
# local -ga net_devs 00:17:39.542 06:57:46 -- nvmf/common.sh@295 -- # e810=() 00:17:39.542 06:57:46 -- nvmf/common.sh@295 -- # local -ga e810 00:17:39.542 06:57:46 -- nvmf/common.sh@296 -- # x722=() 00:17:39.542 06:57:46 -- nvmf/common.sh@296 -- # local -ga x722 00:17:39.542 06:57:46 -- nvmf/common.sh@297 -- # mlx=() 00:17:39.542 06:57:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:39.542 06:57:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.542 06:57:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:39.542 06:57:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:39.542 06:57:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:39.542 06:57:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:39.542 06:57:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:39.542 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:17:39.542 06:57:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:39.542 06:57:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:39.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:39.542 06:57:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:39.542 06:57:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:39.542 06:57:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.542 06:57:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:39.542 06:57:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.542 06:57:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:39.542 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:39.542 06:57:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.542 06:57:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:39.542 06:57:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.542 06:57:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:39.542 
06:57:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.542 06:57:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:39.542 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:39.542 06:57:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.542 06:57:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:39.542 06:57:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:39.542 06:57:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:39.542 06:57:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:39.542 06:57:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.542 06:57:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.542 06:57:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.542 06:57:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:39.542 06:57:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.542 06:57:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.542 06:57:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:39.542 06:57:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.542 06:57:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.542 06:57:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:39.542 06:57:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:39.542 06:57:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.542 06:57:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.800 06:57:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.800 06:57:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.800 06:57:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:39.800 06:57:46 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.800 06:57:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.800 06:57:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.800 06:57:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:39.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:17:39.800 00:17:39.800 --- 10.0.0.2 ping statistics --- 00:17:39.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.800 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:17:39.800 06:57:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:17:39.800 00:17:39.800 --- 10.0.0.1 ping statistics --- 00:17:39.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.800 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:39.800 06:57:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.800 06:57:46 -- nvmf/common.sh@410 -- # return 0 00:17:39.800 06:57:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:39.800 06:57:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.800 06:57:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:39.800 06:57:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:39.800 06:57:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.800 06:57:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:39.800 06:57:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:39.800 06:57:46 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:39.800 06:57:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:39.800 06:57:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:39.800 06:57:46 -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.800 06:57:46 -- nvmf/common.sh@469 -- # nvmfpid=3052950 00:17:39.800 06:57:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.800 06:57:46 -- nvmf/common.sh@470 -- # waitforlisten 3052950 00:17:39.800 06:57:46 -- common/autotest_common.sh@819 -- # '[' -z 3052950 ']' 00:17:39.800 06:57:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.800 06:57:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:39.800 06:57:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.800 06:57:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:39.800 06:57:46 -- common/autotest_common.sh@10 -- # set +x 00:17:39.800 [2024-05-12 06:57:46.853632] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:39.800 [2024-05-12 06:57:46.853739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.800 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.800 [2024-05-12 06:57:46.916015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.057 [2024-05-12 06:57:47.019600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:40.057 [2024-05-12 06:57:47.019770] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.057 [2024-05-12 06:57:47.019789] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:40.057 [2024-05-12 06:57:47.019816] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.057 [2024-05-12 06:57:47.019843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.995 06:57:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:40.995 06:57:47 -- common/autotest_common.sh@852 -- # return 0 00:17:40.995 06:57:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:40.995 06:57:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:40.995 06:57:47 -- common/autotest_common.sh@10 -- # set +x 00:17:40.995 06:57:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.995 06:57:47 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:40.995 06:57:47 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:40.995 06:57:47 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:40.995 06:57:47 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:40.995 06:57:47 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:40.995 06:57:47 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:40.995 06:57:47 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:40.995 06:57:47 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.995 [2024-05-12 06:57:48.060935] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.995 [2024-05-12 06:57:48.076925] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.995 [2024-05-12 06:57:48.077126] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:40.995 malloc0 00:17:41.255 06:57:48 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.255 06:57:48 -- fips/fips.sh@148 -- # bdevperf_pid=3053117 00:17:41.255 06:57:48 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.255 06:57:48 -- fips/fips.sh@149 -- # waitforlisten 3053117 /var/tmp/bdevperf.sock 00:17:41.255 06:57:48 -- common/autotest_common.sh@819 -- # '[' -z 3053117 ']' 00:17:41.255 06:57:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.255 06:57:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.255 06:57:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.255 06:57:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.255 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:17:41.255 [2024-05-12 06:57:48.197113] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:17:41.255 [2024-05-12 06:57:48.197213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053117 ] 00:17:41.255 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.255 [2024-05-12 06:57:48.255160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.255 [2024-05-12 06:57:48.360859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.191 06:57:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:42.191 06:57:49 -- common/autotest_common.sh@852 -- # return 0 00:17:42.191 06:57:49 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:42.191 [2024-05-12 06:57:49.311180] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.451 TLSTESTn1 00:17:42.451 06:57:49 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:42.451 Running I/O for 10 seconds... 
00:17:54.661 00:17:54.661 Latency(us) 00:17:54.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.661 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:54.661 Verification LBA range: start 0x0 length 0x2000 00:17:54.661 TLSTESTn1 : 10.03 1936.84 7.57 0.00 0.00 65989.00 4805.97 74953.77 00:17:54.661 =================================================================================================================== 00:17:54.661 Total : 1936.84 7.57 0.00 0.00 65989.00 4805.97 74953.77 00:17:54.661 0 00:17:54.661 06:57:59 -- fips/fips.sh@1 -- # cleanup 00:17:54.661 06:57:59 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:54.661 06:57:59 -- common/autotest_common.sh@796 -- # type=--id 00:17:54.661 06:57:59 -- common/autotest_common.sh@797 -- # id=0 00:17:54.661 06:57:59 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:54.661 06:57:59 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:54.661 06:57:59 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:54.661 06:57:59 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:54.661 06:57:59 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:54.661 06:57:59 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:54.662 nvmf_trace.0 00:17:54.662 06:57:59 -- common/autotest_common.sh@811 -- # return 0 00:17:54.662 06:57:59 -- fips/fips.sh@16 -- # killprocess 3053117 00:17:54.662 06:57:59 -- common/autotest_common.sh@926 -- # '[' -z 3053117 ']' 00:17:54.662 06:57:59 -- common/autotest_common.sh@930 -- # kill -0 3053117 00:17:54.662 06:57:59 -- common/autotest_common.sh@931 -- # uname 00:17:54.662 06:57:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.662 06:57:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3053117 00:17:54.662 
06:57:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:54.662 06:57:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:54.662 06:57:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3053117' 00:17:54.662 killing process with pid 3053117 00:17:54.662 06:57:59 -- common/autotest_common.sh@945 -- # kill 3053117 00:17:54.662 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.662 00:17:54.662 Latency(us) 00:17:54.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.662 =================================================================================================================== 00:17:54.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.662 06:57:59 -- common/autotest_common.sh@950 -- # wait 3053117 00:17:54.662 06:57:59 -- fips/fips.sh@17 -- # nvmftestfini 00:17:54.662 06:57:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:54.662 06:57:59 -- nvmf/common.sh@116 -- # sync 00:17:54.662 06:57:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:54.662 06:57:59 -- nvmf/common.sh@119 -- # set +e 00:17:54.662 06:57:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:54.662 06:57:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:54.662 rmmod nvme_tcp 00:17:54.662 rmmod nvme_fabrics 00:17:54.662 rmmod nvme_keyring 00:17:54.662 06:57:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:54.662 06:57:59 -- nvmf/common.sh@123 -- # set -e 00:17:54.662 06:57:59 -- nvmf/common.sh@124 -- # return 0 00:17:54.662 06:57:59 -- nvmf/common.sh@477 -- # '[' -n 3052950 ']' 00:17:54.662 06:57:59 -- nvmf/common.sh@478 -- # killprocess 3052950 00:17:54.662 06:57:59 -- common/autotest_common.sh@926 -- # '[' -z 3052950 ']' 00:17:54.662 06:57:59 -- common/autotest_common.sh@930 -- # kill -0 3052950 00:17:54.662 06:57:59 -- common/autotest_common.sh@931 -- # uname 00:17:54.662 06:57:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:17:54.662 06:57:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3052950 00:17:54.662 06:57:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:54.662 06:57:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:54.662 06:57:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3052950' 00:17:54.662 killing process with pid 3052950 00:17:54.662 06:57:59 -- common/autotest_common.sh@945 -- # kill 3052950 00:17:54.662 06:57:59 -- common/autotest_common.sh@950 -- # wait 3052950 00:17:54.662 06:58:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:54.662 06:58:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:54.662 06:58:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:54.662 06:58:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.662 06:58:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:54.662 06:58:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.662 06:58:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.662 06:58:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.229 06:58:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:55.229 06:58:02 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:55.229 00:17:55.229 real 0m17.823s 00:17:55.229 user 0m22.287s 00:17:55.229 sys 0m6.831s 00:17:55.229 06:58:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.229 06:58:02 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 ************************************ 00:17:55.229 END TEST nvmf_fips 00:17:55.229 ************************************ 00:17:55.229 06:58:02 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:55.229 06:58:02 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:55.229 06:58:02 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:55.229 06:58:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:55.229 06:58:02 -- common/autotest_common.sh@10 -- # set +x 00:17:55.229 ************************************ 00:17:55.229 START TEST nvmf_fuzz 00:17:55.229 ************************************ 00:17:55.229 06:58:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:55.486 * Looking for test storage... 00:17:55.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.486 06:58:02 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.486 06:58:02 -- nvmf/common.sh@7 -- # uname -s 00:17:55.486 06:58:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.486 06:58:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.486 06:58:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.486 06:58:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.486 06:58:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.486 06:58:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.486 06:58:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.486 06:58:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.486 06:58:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.486 06:58:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.486 06:58:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.486 06:58:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.486 06:58:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.486 06:58:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.486 06:58:02 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:17:55.486 06:58:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.486 06:58:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.486 06:58:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.486 06:58:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.487 06:58:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.487 06:58:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.487 06:58:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.487 06:58:02 -- paths/export.sh@5 -- # export PATH 00:17:55.487 06:58:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.487 06:58:02 -- nvmf/common.sh@46 -- # : 0 00:17:55.487 06:58:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:55.487 06:58:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:55.487 06:58:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:55.487 06:58:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.487 06:58:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.487 06:58:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:55.487 06:58:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:55.487 06:58:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:55.487 06:58:02 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:55.487 06:58:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:55.487 06:58:02 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:17:55.487 06:58:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:55.487 06:58:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:55.487 06:58:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:55.487 06:58:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.487 06:58:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.487 06:58:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.487 06:58:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:55.487 06:58:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:55.487 06:58:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:55.487 06:58:02 -- common/autotest_common.sh@10 -- # set +x 00:17:57.386 06:58:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:57.386 06:58:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:57.386 06:58:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:57.386 06:58:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:57.386 06:58:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:57.386 06:58:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:57.386 06:58:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:57.386 06:58:04 -- nvmf/common.sh@294 -- # net_devs=() 00:17:57.386 06:58:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:57.386 06:58:04 -- nvmf/common.sh@295 -- # e810=() 00:17:57.386 06:58:04 -- nvmf/common.sh@295 -- # local -ga e810 00:17:57.386 06:58:04 -- nvmf/common.sh@296 -- # x722=() 00:17:57.386 06:58:04 -- nvmf/common.sh@296 -- # local -ga x722 00:17:57.386 06:58:04 -- nvmf/common.sh@297 -- # mlx=() 00:17:57.386 06:58:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:57.386 06:58:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.386 06:58:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:57.386 06:58:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:57.386 06:58:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:57.386 06:58:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:57.386 06:58:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:57.386 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:57.386 06:58:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:57.386 06:58:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:57.386 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:17:57.386 06:58:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:57.386 06:58:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:57.386 06:58:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.386 06:58:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:57.386 06:58:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.386 06:58:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:57.386 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:57.386 06:58:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.386 06:58:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:57.386 06:58:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.386 06:58:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:57.386 06:58:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.386 06:58:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:57.386 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:57.386 06:58:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.386 06:58:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:57.386 06:58:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:57.386 06:58:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:57.386 06:58:04 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:57.386 06:58:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.386 06:58:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.386 06:58:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.386 06:58:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:57.386 06:58:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.386 06:58:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.386 06:58:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:57.386 06:58:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.386 06:58:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.386 06:58:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:57.386 06:58:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:57.386 06:58:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.386 06:58:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.386 06:58:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.386 06:58:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.386 06:58:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:57.386 06:58:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.386 06:58:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.386 06:58:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.386 06:58:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:57.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:57.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:17:57.386 00:17:57.386 --- 10.0.0.2 ping statistics --- 00:17:57.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.386 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:57.386 06:58:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:17:57.386 00:17:57.386 --- 10.0.0.1 ping statistics --- 00:17:57.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.386 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:57.386 06:58:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.386 06:58:04 -- nvmf/common.sh@410 -- # return 0 00:17:57.386 06:58:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:57.386 06:58:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.386 06:58:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:57.386 06:58:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.386 06:58:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:57.386 06:58:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:57.386 06:58:04 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3056435 00:17:57.386 06:58:04 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:57.386 06:58:04 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:57.386 06:58:04 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3056435 00:17:57.386 06:58:04 -- common/autotest_common.sh@819 -- # '[' -z 3056435 ']' 00:17:57.386 06:58:04 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:17:57.386 06:58:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:57.386 06:58:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.387 06:58:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:57.387 06:58:04 -- common/autotest_common.sh@10 -- # set +x 00:17:58.326 06:58:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:58.326 06:58:05 -- common/autotest_common.sh@852 -- # return 0 00:17:58.326 06:58:05 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.326 06:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.326 06:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:58.587 06:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.587 06:58:05 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:58.587 06:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.587 06:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:58.587 Malloc0 00:17:58.587 06:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.587 06:58:05 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:58.587 06:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.587 06:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:58.587 06:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.587 06:58:05 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.587 06:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.587 06:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:58.587 06:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:17:58.587 06:58:05 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.587 06:58:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.587 06:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:58.587 06:58:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.587 06:58:05 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:58.587 06:58:05 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:30.717 Fuzzing completed. Shutting down the fuzz application 00:18:30.717 00:18:30.717 Dumping successful admin opcodes: 00:18:30.717 8, 9, 10, 24, 00:18:30.717 Dumping successful io opcodes: 00:18:30.717 0, 9, 00:18:30.717 NS: 0x200003aeff00 I/O qp, Total commands completed: 478087, total successful commands: 2772, random_seed: 558083584 00:18:30.717 NS: 0x200003aeff00 admin qp, Total commands completed: 59856, total successful commands: 475, random_seed: 4259807104 00:18:30.717 06:58:36 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:30.717 Fuzzing completed. 
Shutting down the fuzz application 00:18:30.717 00:18:30.717 Dumping successful admin opcodes: 00:18:30.717 24, 00:18:30.717 Dumping successful io opcodes: 00:18:30.717 00:18:30.717 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3124497526 00:18:30.717 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3124631826 00:18:30.717 06:58:37 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.717 06:58:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:30.717 06:58:37 -- common/autotest_common.sh@10 -- # set +x 00:18:30.717 06:58:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:30.717 06:58:37 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:30.717 06:58:37 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:30.717 06:58:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:30.717 06:58:37 -- nvmf/common.sh@116 -- # sync 00:18:30.717 06:58:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:30.717 06:58:37 -- nvmf/common.sh@119 -- # set +e 00:18:30.717 06:58:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:30.717 06:58:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:30.717 rmmod nvme_tcp 00:18:30.717 rmmod nvme_fabrics 00:18:30.717 rmmod nvme_keyring 00:18:30.717 06:58:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:30.717 06:58:37 -- nvmf/common.sh@123 -- # set -e 00:18:30.717 06:58:37 -- nvmf/common.sh@124 -- # return 0 00:18:30.717 06:58:37 -- nvmf/common.sh@477 -- # '[' -n 3056435 ']' 00:18:30.717 06:58:37 -- nvmf/common.sh@478 -- # killprocess 3056435 00:18:30.717 06:58:37 -- common/autotest_common.sh@926 -- # '[' -z 3056435 ']' 00:18:30.717 06:58:37 -- common/autotest_common.sh@930 -- # kill -0 3056435 00:18:30.717 06:58:37 -- common/autotest_common.sh@931 -- # uname 00:18:30.717 06:58:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:18:30.717 06:58:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3056435 00:18:30.717 06:58:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:30.717 06:58:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:30.717 06:58:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3056435' 00:18:30.717 killing process with pid 3056435 00:18:30.717 06:58:37 -- common/autotest_common.sh@945 -- # kill 3056435 00:18:30.717 06:58:37 -- common/autotest_common.sh@950 -- # wait 3056435 00:18:30.975 06:58:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:30.975 06:58:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:30.975 06:58:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:30.975 06:58:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.975 06:58:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:30.975 06:58:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.975 06:58:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.975 06:58:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.881 06:58:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:32.881 06:58:40 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:18:33.153 00:18:33.153 real 0m37.703s 00:18:33.153 user 0m52.506s 00:18:33.153 sys 0m14.786s 00:18:33.153 06:58:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.153 06:58:40 -- common/autotest_common.sh@10 -- # set +x 00:18:33.153 ************************************ 00:18:33.153 END TEST nvmf_fuzz 00:18:33.153 ************************************ 00:18:33.153 06:58:40 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh 
--transport=tcp 00:18:33.153 06:58:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:33.153 06:58:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:33.153 06:58:40 -- common/autotest_common.sh@10 -- # set +x 00:18:33.153 ************************************ 00:18:33.153 START TEST nvmf_multiconnection 00:18:33.153 ************************************ 00:18:33.153 06:58:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:33.153 * Looking for test storage... 00:18:33.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.153 06:58:40 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.153 06:58:40 -- nvmf/common.sh@7 -- # uname -s 00:18:33.153 06:58:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.153 06:58:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.153 06:58:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.153 06:58:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.153 06:58:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.153 06:58:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.153 06:58:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.153 06:58:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.153 06:58:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.153 06:58:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.153 06:58:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.153 06:58:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.153 06:58:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.153 06:58:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:18:33.153 06:58:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.153 06:58:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.153 06:58:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.153 06:58:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.153 06:58:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.153 06:58:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.153 06:58:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.153 06:58:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.153 06:58:40 -- paths/export.sh@5 -- # export PATH 00:18:33.153 06:58:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.153 06:58:40 -- nvmf/common.sh@46 -- # : 0 00:18:33.153 06:58:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:33.153 06:58:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:33.153 06:58:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:33.153 06:58:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.153 06:58:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.153 06:58:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:33.153 06:58:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:33.153 06:58:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:33.153 06:58:40 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:33.153 06:58:40 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:33.153 06:58:40 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:33.153 06:58:40 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:33.153 06:58:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:33.153 06:58:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.153 06:58:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:33.153 06:58:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:33.153 06:58:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:33.153 06:58:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.153 06:58:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.153 06:58:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.153 06:58:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:33.153 06:58:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:33.153 06:58:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:33.153 06:58:40 -- common/autotest_common.sh@10 -- # set +x 00:18:35.055 06:58:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:35.055 06:58:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:35.055 06:58:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:35.055 06:58:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:35.055 06:58:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:35.055 06:58:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:35.055 06:58:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:35.055 06:58:42 -- nvmf/common.sh@294 -- # net_devs=() 00:18:35.055 06:58:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:35.055 06:58:42 -- nvmf/common.sh@295 -- # e810=() 00:18:35.055 06:58:42 -- nvmf/common.sh@295 -- # local -ga e810 00:18:35.055 06:58:42 -- nvmf/common.sh@296 -- # x722=() 00:18:35.055 06:58:42 -- nvmf/common.sh@296 -- # local -ga x722 00:18:35.055 06:58:42 -- nvmf/common.sh@297 -- # mlx=() 00:18:35.055 06:58:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:35.055 
06:58:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.055 06:58:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:35.055 06:58:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:35.055 06:58:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:35.055 06:58:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:35.055 06:58:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:35.055 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:35.055 06:58:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:18:35.055 06:58:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:35.055 06:58:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:35.055 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:35.055 06:58:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:35.055 06:58:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:35.055 06:58:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.055 06:58:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:35.055 06:58:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.055 06:58:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:35.055 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:35.055 06:58:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.055 06:58:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:35.055 06:58:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.055 06:58:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:35.055 06:58:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.055 06:58:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:35.055 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:35.055 06:58:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.055 06:58:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:35.055 
06:58:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:35.055 06:58:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:35.055 06:58:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:35.055 06:58:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.055 06:58:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.055 06:58:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.055 06:58:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:35.055 06:58:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.055 06:58:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.055 06:58:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:35.055 06:58:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.055 06:58:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.055 06:58:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:35.055 06:58:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:35.055 06:58:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.055 06:58:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.055 06:58:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.055 06:58:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.055 06:58:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:35.055 06:58:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.313 06:58:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.313 06:58:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.313 06:58:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:35.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:35.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:18:35.313 00:18:35.313 --- 10.0.0.2 ping statistics --- 00:18:35.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.313 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:18:35.313 06:58:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:18:35.313 00:18:35.313 --- 10.0.0.1 ping statistics --- 00:18:35.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.313 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:35.313 06:58:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.313 06:58:42 -- nvmf/common.sh@410 -- # return 0 00:18:35.313 06:58:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:35.313 06:58:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.313 06:58:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:35.313 06:58:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:35.313 06:58:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.313 06:58:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:35.313 06:58:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:35.314 06:58:42 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:35.314 06:58:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:35.314 06:58:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:35.314 06:58:42 -- common/autotest_common.sh@10 -- # set +x 00:18:35.314 06:58:42 -- nvmf/common.sh@469 -- # nvmfpid=3062428 00:18:35.314 06:58:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:35.314 06:58:42 -- nvmf/common.sh@470 -- # waitforlisten 3062428 00:18:35.314 06:58:42 -- 
common/autotest_common.sh@819 -- # '[' -z 3062428 ']' 00:18:35.314 06:58:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.314 06:58:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:35.314 06:58:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.314 06:58:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:35.314 06:58:42 -- common/autotest_common.sh@10 -- # set +x 00:18:35.314 [2024-05-12 06:58:42.298569] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:35.314 [2024-05-12 06:58:42.298642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.314 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.314 [2024-05-12 06:58:42.366382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:35.573 [2024-05-12 06:58:42.482794] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:35.573 [2024-05-12 06:58:42.482962] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.573 [2024-05-12 06:58:42.482982] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.573 [2024-05-12 06:58:42.482996] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:35.573 [2024-05-12 06:58:42.483119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.573 [2024-05-12 06:58:42.483186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.573 [2024-05-12 06:58:42.483283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.573 [2024-05-12 06:58:42.483285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.142 06:58:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:36.142 06:58:43 -- common/autotest_common.sh@852 -- # return 0 00:18:36.142 06:58:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:36.402 06:58:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:36.402 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.402 06:58:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.402 06:58:43 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:36.402 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.402 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.402 [2024-05-12 06:58:43.301335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.402 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.402 06:58:43 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:36.402 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.402 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:36.402 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.402 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.402 Malloc1 00:18:36.402 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.402 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:36.402 06:58:43 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.402 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.402 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.402 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:36.402 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.402 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.402 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.402 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:36.402 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 [2024-05-12 06:58:43.358846] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.403 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 Malloc2 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.403 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 Malloc3 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.403 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 Malloc4 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.403 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.403 Malloc5 00:18:36.403 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.403 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK5 00:18:36.403 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.403 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.664 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 Malloc6 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.664 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 Malloc7 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.664 06:58:43 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 Malloc8 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.664 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 Malloc9 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.664 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 Malloc10 00:18:36.664 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.664 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:36.664 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.664 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.925 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:36.925 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.925 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.925 06:58:43 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:36.925 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.925 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.925 06:58:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.925 06:58:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:36.925 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.925 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 Malloc11 00:18:36.925 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.925 06:58:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:36.925 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.925 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.925 06:58:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:36.925 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.925 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.925 06:58:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:36.925 06:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:36.925 06:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 06:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:36.925 06:58:43 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:36.925 06:58:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.925 06:58:43 
-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:37.492 06:58:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:37.493 06:58:44 -- common/autotest_common.sh@1177 -- # local i=0 00:18:37.493 06:58:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:37.493 06:58:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:37.493 06:58:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:39.398 06:58:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:39.398 06:58:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:39.398 06:58:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:18:39.398 06:58:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:39.398 06:58:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:39.398 06:58:46 -- common/autotest_common.sh@1187 -- # return 0 00:18:39.398 06:58:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.398 06:58:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:39.969 06:58:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:39.969 06:58:47 -- common/autotest_common.sh@1177 -- # local i=0 00:18:39.969 06:58:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:39.969 06:58:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:39.969 06:58:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:41.954 06:58:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:42.213 06:58:49 -- common/autotest_common.sh@1186 -- # lsblk 
-l -o NAME,SERIAL 00:18:42.213 06:58:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:18:42.213 06:58:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:42.213 06:58:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.213 06:58:49 -- common/autotest_common.sh@1187 -- # return 0 00:18:42.213 06:58:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.213 06:58:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:42.786 06:58:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:42.786 06:58:49 -- common/autotest_common.sh@1177 -- # local i=0 00:18:42.786 06:58:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.786 06:58:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:42.786 06:58:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:45.328 06:58:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:45.328 06:58:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:45.328 06:58:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:18:45.328 06:58:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:45.328 06:58:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.328 06:58:51 -- common/autotest_common.sh@1187 -- # return 0 00:18:45.328 06:58:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.328 06:58:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:45.588 06:58:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 
00:18:45.588 06:58:52 -- common/autotest_common.sh@1177 -- # local i=0 00:18:45.588 06:58:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.588 06:58:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:45.589 06:58:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:47.497 06:58:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:47.497 06:58:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:47.497 06:58:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:18:47.497 06:58:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:47.497 06:58:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.497 06:58:54 -- common/autotest_common.sh@1187 -- # return 0 00:18:47.497 06:58:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.497 06:58:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:48.433 06:58:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:48.433 06:58:55 -- common/autotest_common.sh@1177 -- # local i=0 00:18:48.433 06:58:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.433 06:58:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:48.433 06:58:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:50.340 06:58:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:50.340 06:58:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:50.340 06:58:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:18:50.340 06:58:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:50.340 06:58:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.340 06:58:57 -- 
common/autotest_common.sh@1187 -- # return 0 00:18:50.340 06:58:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:50.340 06:58:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:50.910 06:58:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:50.910 06:58:57 -- common/autotest_common.sh@1177 -- # local i=0 00:18:50.910 06:58:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.910 06:58:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:50.910 06:58:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:52.811 06:58:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:52.811 06:58:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:52.811 06:58:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:18:52.811 06:58:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:52.811 06:58:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.811 06:58:59 -- common/autotest_common.sh@1187 -- # return 0 00:18:52.811 06:58:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.811 06:58:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:53.747 06:59:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:53.747 06:59:00 -- common/autotest_common.sh@1177 -- # local i=0 00:18:53.747 06:59:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:53.747 06:59:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:53.747 06:59:00 -- common/autotest_common.sh@1184 
-- # sleep 2 00:18:55.651 06:59:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:55.651 06:59:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:55.651 06:59:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:55.651 06:59:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:55.651 06:59:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:55.651 06:59:02 -- common/autotest_common.sh@1187 -- # return 0 00:18:55.651 06:59:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:55.651 06:59:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:56.584 06:59:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:56.584 06:59:03 -- common/autotest_common.sh@1177 -- # local i=0 00:18:56.584 06:59:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.584 06:59:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:56.584 06:59:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:58.490 06:59:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:58.490 06:59:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:58.490 06:59:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:58.490 06:59:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:58.490 06:59:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.490 06:59:05 -- common/autotest_common.sh@1187 -- # return 0 00:18:58.490 06:59:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.490 06:59:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:59.103 06:59:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:59.103 06:59:06 -- common/autotest_common.sh@1177 -- # local i=0 00:18:59.103 06:59:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.103 06:59:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:59.103 06:59:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:01.022 06:59:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:01.022 06:59:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:01.022 06:59:08 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:19:01.022 06:59:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:01.022 06:59:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.022 06:59:08 -- common/autotest_common.sh@1187 -- # return 0 00:19:01.022 06:59:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.022 06:59:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:01.958 06:59:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:01.958 06:59:08 -- common/autotest_common.sh@1177 -- # local i=0 00:19:01.958 06:59:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.958 06:59:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:01.958 06:59:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:03.863 06:59:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:03.863 06:59:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:03.863 06:59:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:19:03.863 06:59:10 -- 
common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:03.863 06:59:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.863 06:59:10 -- common/autotest_common.sh@1187 -- # return 0 00:19:03.863 06:59:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.863 06:59:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:05.239 06:59:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:05.239 06:59:11 -- common/autotest_common.sh@1177 -- # local i=0 00:19:05.239 06:59:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.239 06:59:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:05.239 06:59:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:07.138 06:59:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:07.138 06:59:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:07.138 06:59:13 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:19:07.138 06:59:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:07.138 06:59:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.138 06:59:13 -- common/autotest_common.sh@1187 -- # return 0 00:19:07.138 06:59:13 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:07.138 [global] 00:19:07.138 thread=1 00:19:07.138 invalidate=1 00:19:07.138 rw=read 00:19:07.138 time_based=1 00:19:07.138 runtime=10 00:19:07.138 ioengine=libaio 00:19:07.138 direct=1 00:19:07.138 bs=262144 00:19:07.138 iodepth=64 00:19:07.138 norandommap=1 00:19:07.138 numjobs=1 00:19:07.138 00:19:07.138 [job0] 00:19:07.139 filename=/dev/nvme0n1 00:19:07.139 [job1] 
00:19:07.139 filename=/dev/nvme10n1 00:19:07.139 [job2] 00:19:07.139 filename=/dev/nvme1n1 00:19:07.139 [job3] 00:19:07.139 filename=/dev/nvme2n1 00:19:07.139 [job4] 00:19:07.139 filename=/dev/nvme3n1 00:19:07.139 [job5] 00:19:07.139 filename=/dev/nvme4n1 00:19:07.139 [job6] 00:19:07.139 filename=/dev/nvme5n1 00:19:07.139 [job7] 00:19:07.139 filename=/dev/nvme6n1 00:19:07.139 [job8] 00:19:07.139 filename=/dev/nvme7n1 00:19:07.139 [job9] 00:19:07.139 filename=/dev/nvme8n1 00:19:07.139 [job10] 00:19:07.139 filename=/dev/nvme9n1 00:19:07.139 Could not set queue depth (nvme0n1) 00:19:07.139 Could not set queue depth (nvme10n1) 00:19:07.139 Could not set queue depth (nvme1n1) 00:19:07.139 Could not set queue depth (nvme2n1) 00:19:07.139 Could not set queue depth (nvme3n1) 00:19:07.139 Could not set queue depth (nvme4n1) 00:19:07.139 Could not set queue depth (nvme5n1) 00:19:07.139 Could not set queue depth (nvme6n1) 00:19:07.139 Could not set queue depth (nvme7n1) 00:19:07.139 Could not set queue depth (nvme8n1) 00:19:07.139 Could not set queue depth (nvme9n1) 00:19:07.139 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:07.139 fio-3.35 00:19:07.139 Starting 11 threads 00:19:19.392 00:19:19.392 job0: (groupid=0, jobs=1): err= 0: pid=3066802: Sun May 12 06:59:24 2024 00:19:19.392 read: IOPS=596, BW=149MiB/s (156MB/s)(1508MiB/10117msec) 00:19:19.392 slat (usec): min=13, max=167396, avg=1602.52, stdev=5930.63 00:19:19.392 clat (msec): min=36, max=433, avg=105.65, stdev=61.43 00:19:19.392 lat (msec): min=36, max=438, avg=107.25, stdev=62.33 00:19:19.392 clat percentiles (msec): 00:19:19.392 | 1.00th=[ 42], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 54], 00:19:19.392 | 30.00th=[ 66], 40.00th=[ 77], 50.00th=[ 87], 60.00th=[ 105], 00:19:19.392 | 70.00th=[ 128], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 207], 00:19:19.392 | 99.00th=[ 355], 99.50th=[ 393], 99.90th=[ 430], 99.95th=[ 430], 00:19:19.392 | 99.99th=[ 435] 00:19:19.392 bw ( KiB/s): min=57344, max=329580, per=9.09%, avg=152750.20, stdev=76966.50, samples=20 00:19:19.392 iops : min= 224, max= 1287, avg=596.55, stdev=300.66, samples=20 00:19:19.392 lat (msec) : 50=15.09%, 100=42.84%, 250=39.46%, 500=2.62% 00:19:19.392 cpu : usr=0.51%, sys=2.05%, ctx=1278, majf=0, minf=4097 00:19:19.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.392 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.392 job1: (groupid=0, jobs=1): 
err= 0: pid=3066803: Sun May 12 06:59:24 2024 00:19:19.392 read: IOPS=613, BW=153MiB/s (161MB/s)(1550MiB/10106msec) 00:19:19.392 slat (usec): min=10, max=239048, avg=1218.85, stdev=5024.23 00:19:19.392 clat (usec): min=1169, max=424208, avg=103004.45, stdev=46095.20 00:19:19.392 lat (usec): min=1199, max=424237, avg=104223.30, stdev=46658.11 00:19:19.392 clat percentiles (msec): 00:19:19.392 | 1.00th=[ 5], 5.00th=[ 45], 10.00th=[ 58], 20.00th=[ 71], 00:19:19.392 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 107], 00:19:19.392 | 70.00th=[ 120], 80.00th=[ 133], 90.00th=[ 155], 95.00th=[ 176], 00:19:19.392 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 363], 99.95th=[ 372], 00:19:19.392 | 99.99th=[ 426] 00:19:19.392 bw ( KiB/s): min=62464, max=253440, per=9.35%, avg=157083.90, stdev=43702.11, samples=20 00:19:19.392 iops : min= 244, max= 990, avg=613.55, stdev=170.68, samples=20 00:19:19.392 lat (msec) : 2=0.11%, 4=0.47%, 10=1.34%, 20=0.69%, 50=3.42% 00:19:19.392 lat (msec) : 100=48.35%, 250=44.53%, 500=1.10% 00:19:19.392 cpu : usr=0.34%, sys=2.08%, ctx=1364, majf=0, minf=4097 00:19:19.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.392 issued rwts: total=6201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.392 job2: (groupid=0, jobs=1): err= 0: pid=3066804: Sun May 12 06:59:24 2024 00:19:19.392 read: IOPS=503, BW=126MiB/s (132MB/s)(1272MiB/10104msec) 00:19:19.392 slat (usec): min=10, max=83464, avg=1708.02, stdev=4994.43 00:19:19.392 clat (msec): min=4, max=272, avg=125.31, stdev=38.71 00:19:19.392 lat (msec): min=4, max=313, avg=127.02, stdev=39.32 00:19:19.392 clat percentiles (msec): 00:19:19.392 | 1.00th=[ 15], 5.00th=[ 74], 10.00th=[ 84], 20.00th=[ 96], 00:19:19.392 | 30.00th=[ 
106], 40.00th=[ 114], 50.00th=[ 122], 60.00th=[ 128], 00:19:19.392 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 180], 95.00th=[ 194], 00:19:19.392 | 99.00th=[ 232], 99.50th=[ 257], 99.90th=[ 271], 99.95th=[ 271], 00:19:19.392 | 99.99th=[ 271] 00:19:19.392 bw ( KiB/s): min=79872, max=169472, per=7.65%, avg=128568.80, stdev=25936.08, samples=20 00:19:19.392 iops : min= 312, max= 662, avg=502.20, stdev=101.34, samples=20 00:19:19.392 lat (msec) : 10=0.87%, 20=0.20%, 50=0.41%, 100=22.53%, 250=75.40% 00:19:19.392 lat (msec) : 500=0.59% 00:19:19.392 cpu : usr=0.35%, sys=1.79%, ctx=1104, majf=0, minf=4097 00:19:19.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.392 issued rwts: total=5086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.393 job3: (groupid=0, jobs=1): err= 0: pid=3066805: Sun May 12 06:59:24 2024 00:19:19.393 read: IOPS=563, BW=141MiB/s (148MB/s)(1422MiB/10099msec) 00:19:19.393 slat (usec): min=10, max=131226, avg=1176.35, stdev=5428.86 00:19:19.393 clat (msec): min=4, max=427, avg=112.39, stdev=64.72 00:19:19.393 lat (msec): min=4, max=459, avg=113.57, stdev=65.45 00:19:19.393 clat percentiles (msec): 00:19:19.393 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 40], 20.00th=[ 58], 00:19:19.393 | 30.00th=[ 85], 40.00th=[ 94], 50.00th=[ 103], 60.00th=[ 115], 00:19:19.393 | 70.00th=[ 131], 80.00th=[ 155], 90.00th=[ 190], 95.00th=[ 218], 00:19:19.393 | 99.00th=[ 355], 99.50th=[ 368], 99.90th=[ 397], 99.95th=[ 418], 00:19:19.393 | 99.99th=[ 430] 00:19:19.393 bw ( KiB/s): min=53760, max=269312, per=8.57%, avg=143923.75, stdev=56274.81, samples=20 00:19:19.393 iops : min= 210, max= 1052, avg=562.15, stdev=219.83, samples=20 00:19:19.393 lat (msec) : 10=1.34%, 20=2.94%, 50=11.70%, 100=32.01%, 
250=48.51% 00:19:19.393 lat (msec) : 500=3.52% 00:19:19.393 cpu : usr=0.35%, sys=2.09%, ctx=1534, majf=0, minf=3721 00:19:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.393 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.393 job4: (groupid=0, jobs=1): err= 0: pid=3066806: Sun May 12 06:59:24 2024 00:19:19.393 read: IOPS=579, BW=145MiB/s (152MB/s)(1465MiB/10105msec) 00:19:19.393 slat (usec): min=9, max=172123, avg=1130.42, stdev=5689.19 00:19:19.393 clat (msec): min=3, max=364, avg=109.15, stdev=59.28 00:19:19.393 lat (msec): min=3, max=373, avg=110.28, stdev=60.08 00:19:19.393 clat percentiles (msec): 00:19:19.393 | 1.00th=[ 14], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 46], 00:19:19.393 | 30.00th=[ 78], 40.00th=[ 96], 50.00th=[ 111], 60.00th=[ 126], 00:19:19.393 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 182], 95.00th=[ 211], 00:19:19.393 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 355], 99.95th=[ 359], 00:19:19.393 | 99.99th=[ 363] 00:19:19.393 bw ( KiB/s): min=62976, max=331776, per=8.83%, avg=148390.95, stdev=62831.63, samples=20 00:19:19.393 iops : min= 246, max= 1296, avg=579.55, stdev=245.47, samples=20 00:19:19.393 lat (msec) : 4=0.07%, 10=0.56%, 20=2.12%, 50=19.56%, 100=20.99% 00:19:19.393 lat (msec) : 250=54.73%, 500=1.98% 00:19:19.393 cpu : usr=0.35%, sys=2.23%, ctx=1563, majf=0, minf=4097 00:19:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.393 issued rwts: total=5860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.393 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:19:19.393 job5: (groupid=0, jobs=1): err= 0: pid=3066809: Sun May 12 06:59:24 2024 00:19:19.393 read: IOPS=619, BW=155MiB/s (162MB/s)(1564MiB/10104msec) 00:19:19.393 slat (usec): min=9, max=213413, avg=1086.02, stdev=4974.78 00:19:19.393 clat (usec): min=1143, max=388313, avg=102178.97, stdev=54974.05 00:19:19.393 lat (usec): min=1202, max=388354, avg=103265.00, stdev=55657.83 00:19:19.393 clat percentiles (msec): 00:19:19.393 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 57], 00:19:19.393 | 30.00th=[ 77], 40.00th=[ 91], 50.00th=[ 102], 60.00th=[ 113], 00:19:19.393 | 70.00th=[ 124], 80.00th=[ 138], 90.00th=[ 165], 95.00th=[ 190], 00:19:19.393 | 99.00th=[ 347], 99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:19:19.393 | 99.99th=[ 388] 00:19:19.393 bw ( KiB/s): min=93696, max=251392, per=9.44%, avg=158558.80, stdev=43121.22, samples=20 00:19:19.393 iops : min= 366, max= 982, avg=619.30, stdev=168.40, samples=20 00:19:19.393 lat (msec) : 2=0.22%, 4=0.69%, 10=2.09%, 20=2.57%, 50=11.52% 00:19:19.393 lat (msec) : 100=31.55%, 250=50.17%, 500=1.18% 00:19:19.393 cpu : usr=0.48%, sys=2.14%, ctx=1715, majf=0, minf=4097 00:19:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.393 issued rwts: total=6257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.393 job6: (groupid=0, jobs=1): err= 0: pid=3066810: Sun May 12 06:59:24 2024 00:19:19.393 read: IOPS=548, BW=137MiB/s (144MB/s)(1386MiB/10102msec) 00:19:19.393 slat (usec): min=10, max=148388, avg=1193.36, stdev=5114.67 00:19:19.393 clat (usec): min=1126, max=355532, avg=115281.36, stdev=57975.75 00:19:19.393 lat (usec): min=1182, max=378219, avg=116474.72, stdev=58536.72 00:19:19.393 clat percentiles 
(msec): 00:19:19.393 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 36], 20.00th=[ 73], 00:19:19.393 | 30.00th=[ 87], 40.00th=[ 97], 50.00th=[ 110], 60.00th=[ 125], 00:19:19.393 | 70.00th=[ 140], 80.00th=[ 159], 90.00th=[ 188], 95.00th=[ 213], 00:19:19.393 | 99.00th=[ 275], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 351], 00:19:19.393 | 99.99th=[ 355] 00:19:19.393 bw ( KiB/s): min=72704, max=254464, per=8.35%, avg=140316.90, stdev=45580.52, samples=20 00:19:19.393 iops : min= 284, max= 994, avg=548.10, stdev=178.06, samples=20 00:19:19.393 lat (msec) : 2=0.14%, 4=0.50%, 10=0.78%, 20=3.46%, 50=7.41% 00:19:19.393 lat (msec) : 100=29.94%, 250=55.60%, 500=2.16% 00:19:19.393 cpu : usr=0.32%, sys=1.96%, ctx=1484, majf=0, minf=4097 00:19:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.393 issued rwts: total=5545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.393 job7: (groupid=0, jobs=1): err= 0: pid=3066811: Sun May 12 06:59:24 2024 00:19:19.393 read: IOPS=557, BW=139MiB/s (146MB/s)(1411MiB/10117msec) 00:19:19.393 slat (usec): min=9, max=121586, avg=1288.09, stdev=5009.66 00:19:19.393 clat (msec): min=3, max=447, avg=113.38, stdev=57.56 00:19:19.393 lat (msec): min=3, max=447, avg=114.67, stdev=58.28 00:19:19.393 clat percentiles (msec): 00:19:19.393 | 1.00th=[ 24], 5.00th=[ 48], 10.00th=[ 65], 20.00th=[ 75], 00:19:19.393 | 30.00th=[ 82], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 109], 00:19:19.394 | 70.00th=[ 127], 80.00th=[ 150], 90.00th=[ 188], 95.00th=[ 220], 00:19:19.394 | 99.00th=[ 338], 99.50th=[ 401], 99.90th=[ 439], 99.95th=[ 439], 00:19:19.394 | 99.99th=[ 447] 00:19:19.394 bw ( KiB/s): min=63488, max=218624, per=8.50%, avg=142777.45, stdev=45396.70, samples=20 00:19:19.394 iops : min= 248, max= 
854, avg=557.65, stdev=177.34, samples=20 00:19:19.394 lat (msec) : 4=0.02%, 10=0.05%, 20=0.73%, 50=4.45%, 100=48.92% 00:19:19.394 lat (msec) : 250=43.35%, 500=2.48% 00:19:19.394 cpu : usr=0.24%, sys=1.91%, ctx=1469, majf=0, minf=4097 00:19:19.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:19.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.394 issued rwts: total=5642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.394 job8: (groupid=0, jobs=1): err= 0: pid=3066814: Sun May 12 06:59:24 2024 00:19:19.394 read: IOPS=803, BW=201MiB/s (211MB/s)(2012MiB/10017msec) 00:19:19.394 slat (usec): min=9, max=199897, avg=669.41, stdev=4164.59 00:19:19.394 clat (usec): min=1677, max=451901, avg=78936.87, stdev=62639.28 00:19:19.394 lat (usec): min=1703, max=451941, avg=79606.28, stdev=63018.20 00:19:19.394 clat percentiles (msec): 00:19:19.394 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 22], 20.00th=[ 34], 00:19:19.394 | 30.00th=[ 37], 40.00th=[ 45], 50.00th=[ 57], 60.00th=[ 80], 00:19:19.394 | 70.00th=[ 102], 80.00th=[ 128], 90.00th=[ 165], 95.00th=[ 190], 00:19:19.394 | 99.00th=[ 284], 99.50th=[ 397], 99.90th=[ 435], 99.95th=[ 435], 00:19:19.394 | 99.99th=[ 451] 00:19:19.394 bw ( KiB/s): min=52736, max=434688, per=12.16%, avg=204320.35, stdev=112790.85, samples=20 00:19:19.394 iops : min= 206, max= 1698, avg=798.05, stdev=440.49, samples=20 00:19:19.394 lat (msec) : 2=0.12%, 4=2.27%, 10=4.39%, 20=2.71%, 50=36.15% 00:19:19.394 lat (msec) : 100=23.82%, 250=28.87%, 500=1.67% 00:19:19.394 cpu : usr=0.39%, sys=2.47%, ctx=2198, majf=0, minf=4097 00:19:19.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:19.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.394 issued rwts: total=8047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.394 job9: (groupid=0, jobs=1): err= 0: pid=3066815: Sun May 12 06:59:24 2024 00:19:19.394 read: IOPS=640, BW=160MiB/s (168MB/s)(1618MiB/10098msec) 00:19:19.394 slat (usec): min=9, max=124067, avg=1058.89, stdev=4734.52 00:19:19.394 clat (usec): min=1889, max=413148, avg=98716.08, stdev=61005.40 00:19:19.394 lat (usec): min=1939, max=413199, avg=99774.98, stdev=61543.75 00:19:19.394 clat percentiles (msec): 00:19:19.394 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 44], 00:19:19.394 | 30.00th=[ 65], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 103], 00:19:19.394 | 70.00th=[ 113], 80.00th=[ 133], 90.00th=[ 186], 95.00th=[ 220], 00:19:19.394 | 99.00th=[ 300], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 355], 00:19:19.394 | 99.99th=[ 414] 00:19:19.394 bw ( KiB/s): min=73216, max=403456, per=9.76%, avg=164075.25, stdev=68938.87, samples=20 00:19:19.394 iops : min= 286, max= 1576, avg=640.80, stdev=269.28, samples=20 00:19:19.394 lat (msec) : 2=0.06%, 4=0.46%, 10=1.30%, 20=0.80%, 50=20.63% 00:19:19.394 lat (msec) : 100=35.18%, 250=38.29%, 500=3.28% 00:19:19.394 cpu : usr=0.32%, sys=2.05%, ctx=1582, majf=0, minf=4097 00:19:19.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:19.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.394 issued rwts: total=6472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.394 job10: (groupid=0, jobs=1): err= 0: pid=3066816: Sun May 12 06:59:24 2024 00:19:19.394 read: IOPS=555, BW=139MiB/s (146MB/s)(1394MiB/10036msec) 00:19:19.394 slat (usec): min=14, max=138956, avg=1558.36, stdev=4963.45 00:19:19.394 clat (msec): min=4, max=418, 
avg=113.55, stdev=49.95 00:19:19.394 lat (msec): min=5, max=449, avg=115.11, stdev=50.54 00:19:19.395 clat percentiles (msec): 00:19:19.395 | 1.00th=[ 19], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 79], 00:19:19.395 | 30.00th=[ 89], 40.00th=[ 97], 50.00th=[ 106], 60.00th=[ 116], 00:19:19.395 | 70.00th=[ 128], 80.00th=[ 138], 90.00th=[ 161], 95.00th=[ 188], 00:19:19.395 | 99.00th=[ 342], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 414], 00:19:19.395 | 99.99th=[ 418] 00:19:19.395 bw ( KiB/s): min=64512, max=235520, per=8.40%, avg=141079.75, stdev=45744.54, samples=20 00:19:19.395 iops : min= 252, max= 920, avg=551.05, stdev=178.68, samples=20 00:19:19.395 lat (msec) : 10=0.50%, 20=0.57%, 50=1.31%, 100=41.63%, 250=53.38% 00:19:19.395 lat (msec) : 500=2.60% 00:19:19.395 cpu : usr=0.33%, sys=2.12%, ctx=1261, majf=0, minf=4097 00:19:19.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:19.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.395 issued rwts: total=5575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.395 00:19:19.395 Run status group 0 (all jobs): 00:19:19.395 READ: bw=1641MiB/s (1721MB/s), 126MiB/s-201MiB/s (132MB/s-211MB/s), io=16.2GiB (17.4GB), run=10017-10117msec 00:19:19.395 00:19:19.395 Disk stats (read/write): 00:19:19.395 nvme0n1: ios=11873/0, merge=0/0, ticks=1228039/0, in_queue=1228039, util=97.13% 00:19:19.395 nvme10n1: ios=12212/0, merge=0/0, ticks=1232601/0, in_queue=1232601, util=97.37% 00:19:19.395 nvme1n1: ios=9981/0, merge=0/0, ticks=1234207/0, in_queue=1234207, util=97.66% 00:19:19.395 nvme2n1: ios=11197/0, merge=0/0, ticks=1235751/0, in_queue=1235751, util=97.80% 00:19:19.395 nvme3n1: ios=11554/0, merge=0/0, ticks=1237514/0, in_queue=1237514, util=97.86% 00:19:19.395 nvme4n1: ios=12305/0, merge=0/0, ticks=1233547/0, 
in_queue=1233547, util=98.20% 00:19:19.395 nvme5n1: ios=10882/0, merge=0/0, ticks=1233705/0, in_queue=1233705, util=98.37% 00:19:19.395 nvme6n1: ios=11085/0, merge=0/0, ticks=1232834/0, in_queue=1232834, util=98.45% 00:19:19.395 nvme7n1: ios=15723/0, merge=0/0, ticks=1247988/0, in_queue=1247988, util=98.90% 00:19:19.395 nvme8n1: ios=12739/0, merge=0/0, ticks=1234256/0, in_queue=1234256, util=99.06% 00:19:19.395 nvme9n1: ios=10915/0, merge=0/0, ticks=1234005/0, in_queue=1234005, util=99.21% 00:19:19.395 06:59:24 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:19.395 [global] 00:19:19.395 thread=1 00:19:19.395 invalidate=1 00:19:19.395 rw=randwrite 00:19:19.395 time_based=1 00:19:19.395 runtime=10 00:19:19.395 ioengine=libaio 00:19:19.395 direct=1 00:19:19.395 bs=262144 00:19:19.395 iodepth=64 00:19:19.395 norandommap=1 00:19:19.395 numjobs=1 00:19:19.395 00:19:19.395 [job0] 00:19:19.395 filename=/dev/nvme0n1 00:19:19.395 [job1] 00:19:19.395 filename=/dev/nvme10n1 00:19:19.395 [job2] 00:19:19.395 filename=/dev/nvme1n1 00:19:19.395 [job3] 00:19:19.395 filename=/dev/nvme2n1 00:19:19.395 [job4] 00:19:19.395 filename=/dev/nvme3n1 00:19:19.395 [job5] 00:19:19.395 filename=/dev/nvme4n1 00:19:19.395 [job6] 00:19:19.395 filename=/dev/nvme5n1 00:19:19.395 [job7] 00:19:19.395 filename=/dev/nvme6n1 00:19:19.395 [job8] 00:19:19.395 filename=/dev/nvme7n1 00:19:19.395 [job9] 00:19:19.395 filename=/dev/nvme8n1 00:19:19.395 [job10] 00:19:19.395 filename=/dev/nvme9n1 00:19:19.395 Could not set queue depth (nvme0n1) 00:19:19.395 Could not set queue depth (nvme10n1) 00:19:19.395 Could not set queue depth (nvme1n1) 00:19:19.395 Could not set queue depth (nvme2n1) 00:19:19.395 Could not set queue depth (nvme3n1) 00:19:19.395 Could not set queue depth (nvme4n1) 00:19:19.395 Could not set queue depth (nvme5n1) 00:19:19.395 Could not set queue depth (nvme6n1) 00:19:19.395 Could not 
set queue depth (nvme7n1) 00:19:19.395 Could not set queue depth (nvme8n1) 00:19:19.395 Could not set queue depth (nvme9n1) 00:19:19.395 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.395 fio-3.35 00:19:19.395 Starting 11 threads 00:19:29.375 00:19:29.375 job0: (groupid=0, jobs=1): err= 0: pid=3067869: Sun May 12 06:59:35 2024 00:19:29.375 write: IOPS=491, BW=123MiB/s (129MB/s)(1243MiB/10126msec); 0 zone resets 00:19:29.375 slat (usec): min=19, max=153987, avg=1543.26, stdev=4350.58 00:19:29.375 clat (msec): min=2, max=319, avg=128.71, stdev=54.87 00:19:29.375 lat (msec): min=2, max=332, avg=130.26, stdev=55.43 00:19:29.375 clat 
percentiles (msec): 00:19:29.375 | 1.00th=[ 12], 5.00th=[ 34], 10.00th=[ 54], 20.00th=[ 88], 00:19:29.375 | 30.00th=[ 104], 40.00th=[ 116], 50.00th=[ 129], 60.00th=[ 140], 00:19:29.375 | 70.00th=[ 155], 80.00th=[ 171], 90.00th=[ 209], 95.00th=[ 224], 00:19:29.375 | 99.00th=[ 253], 99.50th=[ 292], 99.90th=[ 313], 99.95th=[ 321], 00:19:29.375 | 99.99th=[ 321] 00:19:29.375 bw ( KiB/s): min=80896, max=268800, per=10.02%, avg=125687.00, stdev=42239.22, samples=20 00:19:29.375 iops : min= 316, max= 1050, avg=490.95, stdev=165.01, samples=20 00:19:29.375 lat (msec) : 4=0.14%, 10=0.60%, 20=1.69%, 50=6.74%, 100=18.24% 00:19:29.375 lat (msec) : 250=71.51%, 500=1.09% 00:19:29.375 cpu : usr=1.60%, sys=1.58%, ctx=2356, majf=0, minf=1 00:19:29.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:29.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.375 issued rwts: total=0,4973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.375 job1: (groupid=0, jobs=1): err= 0: pid=3067881: Sun May 12 06:59:35 2024 00:19:29.375 write: IOPS=532, BW=133MiB/s (140MB/s)(1359MiB/10206msec); 0 zone resets 00:19:29.375 slat (usec): min=20, max=89759, avg=985.69, stdev=3887.68 00:19:29.375 clat (usec): min=1666, max=479138, avg=119112.66, stdev=76606.67 00:19:29.375 lat (usec): min=1699, max=479205, avg=120098.35, stdev=77470.98 00:19:29.375 clat percentiles (msec): 00:19:29.375 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 52], 00:19:29.375 | 30.00th=[ 74], 40.00th=[ 92], 50.00th=[ 110], 60.00th=[ 132], 00:19:29.375 | 70.00th=[ 150], 80.00th=[ 174], 90.00th=[ 203], 95.00th=[ 243], 00:19:29.375 | 99.00th=[ 418], 99.50th=[ 435], 99.90th=[ 477], 99.95th=[ 481], 00:19:29.375 | 99.99th=[ 481] 00:19:29.375 bw ( KiB/s): min=45056, max=287744, per=10.96%, avg=137471.45, stdev=54553.91, 
samples=20 00:19:29.375 iops : min= 176, max= 1124, avg=536.90, stdev=213.12, samples=20 00:19:29.375 lat (msec) : 2=0.04%, 4=0.24%, 10=1.86%, 20=3.66%, 50=13.60% 00:19:29.375 lat (msec) : 100=24.68%, 250=51.32%, 500=4.60% 00:19:29.375 cpu : usr=1.83%, sys=1.91%, ctx=3751, majf=0, minf=1 00:19:29.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:29.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.375 issued rwts: total=0,5434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.375 job2: (groupid=0, jobs=1): err= 0: pid=3067882: Sun May 12 06:59:35 2024 00:19:29.375 write: IOPS=440, BW=110MiB/s (115MB/s)(1114MiB/10127msec); 0 zone resets 00:19:29.375 slat (usec): min=23, max=95170, avg=1780.68, stdev=4420.77 00:19:29.375 clat (usec): min=1568, max=446557, avg=143572.60, stdev=66401.29 00:19:29.375 lat (usec): min=1639, max=446603, avg=145353.28, stdev=67282.30 00:19:29.375 clat percentiles (msec): 00:19:29.375 | 1.00th=[ 9], 5.00th=[ 38], 10.00th=[ 63], 20.00th=[ 92], 00:19:29.375 | 30.00th=[ 107], 40.00th=[ 124], 50.00th=[ 140], 60.00th=[ 159], 00:19:29.375 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 226], 95.00th=[ 241], 00:19:29.375 | 99.00th=[ 363], 99.50th=[ 397], 99.90th=[ 426], 99.95th=[ 426], 00:19:29.375 | 99.99th=[ 447] 00:19:29.375 bw ( KiB/s): min=51712, max=191488, per=8.97%, avg=112474.55, stdev=37924.39, samples=20 00:19:29.375 iops : min= 202, max= 748, avg=439.35, stdev=148.14, samples=20 00:19:29.375 lat (msec) : 2=0.07%, 4=0.36%, 10=1.10%, 20=1.95%, 50=3.84% 00:19:29.375 lat (msec) : 100=17.64%, 250=71.44%, 500=3.61% 00:19:29.375 cpu : usr=1.47%, sys=1.71%, ctx=2141, majf=0, minf=1 00:19:29.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:29.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:19:29.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.375 issued rwts: total=0,4457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.375 job3: (groupid=0, jobs=1): err= 0: pid=3067883: Sun May 12 06:59:35 2024 00:19:29.375 write: IOPS=462, BW=116MiB/s (121MB/s)(1165MiB/10080msec); 0 zone resets 00:19:29.375 slat (usec): min=22, max=43490, avg=1654.79, stdev=4134.54 00:19:29.375 clat (usec): min=1911, max=350487, avg=136789.66, stdev=71492.87 00:19:29.375 lat (usec): min=1999, max=350525, avg=138444.46, stdev=72462.12 00:19:29.375 clat percentiles (msec): 00:19:29.375 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 46], 20.00th=[ 75], 00:19:29.376 | 30.00th=[ 90], 40.00th=[ 107], 50.00th=[ 133], 60.00th=[ 157], 00:19:29.376 | 70.00th=[ 171], 80.00th=[ 197], 90.00th=[ 236], 95.00th=[ 271], 00:19:29.376 | 99.00th=[ 305], 99.50th=[ 321], 99.90th=[ 338], 99.95th=[ 347], 00:19:29.376 | 99.99th=[ 351] 00:19:29.376 bw ( KiB/s): min=63488, max=196608, per=9.38%, avg=117618.50, stdev=41658.70, samples=20 00:19:29.376 iops : min= 248, max= 768, avg=459.40, stdev=162.71, samples=20 00:19:29.376 lat (msec) : 2=0.04%, 4=0.15%, 10=0.86%, 20=2.60%, 50=7.84% 00:19:29.376 lat (msec) : 100=24.17%, 250=56.66%, 500=7.69% 00:19:29.376 cpu : usr=1.36%, sys=1.87%, ctx=2402, majf=0, minf=1 00:19:29.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:29.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.376 issued rwts: total=0,4658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.376 job4: (groupid=0, jobs=1): err= 0: pid=3067884: Sun May 12 06:59:35 2024 00:19:29.376 write: IOPS=427, BW=107MiB/s (112MB/s)(1084MiB/10136msec); 0 zone resets 00:19:29.376 slat 
(usec): min=24, max=85636, avg=1966.15, stdev=4838.88 00:19:29.376 clat (msec): min=2, max=494, avg=147.51, stdev=74.63 00:19:29.376 lat (msec): min=3, max=494, avg=149.48, stdev=75.61 00:19:29.376 clat percentiles (msec): 00:19:29.376 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 57], 20.00th=[ 94], 00:19:29.376 | 30.00th=[ 116], 40.00th=[ 128], 50.00th=[ 144], 60.00th=[ 159], 00:19:29.376 | 70.00th=[ 169], 80.00th=[ 197], 90.00th=[ 230], 95.00th=[ 300], 00:19:29.376 | 99.00th=[ 372], 99.50th=[ 409], 99.90th=[ 472], 99.95th=[ 472], 00:19:29.376 | 99.99th=[ 493] 00:19:29.376 bw ( KiB/s): min=49664, max=186368, per=8.72%, avg=109379.35, stdev=36495.86, samples=20 00:19:29.376 iops : min= 194, max= 728, avg=427.25, stdev=142.57, samples=20 00:19:29.376 lat (msec) : 4=0.16%, 10=1.50%, 20=1.94%, 50=5.42%, 100=12.06% 00:19:29.376 lat (msec) : 250=71.38%, 500=7.54% 00:19:29.376 cpu : usr=1.47%, sys=1.36%, ctx=1913, majf=0, minf=1 00:19:29.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:29.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.376 issued rwts: total=0,4336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.376 job5: (groupid=0, jobs=1): err= 0: pid=3067885: Sun May 12 06:59:35 2024 00:19:29.376 write: IOPS=444, BW=111MiB/s (117MB/s)(1134MiB/10194msec); 0 zone resets 00:19:29.376 slat (usec): min=16, max=59345, avg=1596.77, stdev=4433.66 00:19:29.376 clat (msec): min=3, max=424, avg=142.20, stdev=83.05 00:19:29.376 lat (msec): min=3, max=424, avg=143.79, stdev=84.09 00:19:29.376 clat percentiles (msec): 00:19:29.376 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 51], 20.00th=[ 67], 00:19:29.376 | 30.00th=[ 73], 40.00th=[ 102], 50.00th=[ 131], 60.00th=[ 161], 00:19:29.376 | 70.00th=[ 190], 80.00th=[ 220], 90.00th=[ 264], 95.00th=[ 288], 00:19:29.376 | 
99.00th=[ 338], 99.50th=[ 376], 99.90th=[ 426], 99.95th=[ 426], 00:19:29.376 | 99.99th=[ 426] 00:19:29.376 bw ( KiB/s): min=59392, max=247808, per=9.13%, avg=114473.15, stdev=51296.46, samples=20 00:19:29.376 iops : min= 232, max= 968, avg=447.15, stdev=200.38, samples=20 00:19:29.376 lat (msec) : 4=0.11%, 10=1.32%, 20=1.92%, 50=6.55%, 100=29.22% 00:19:29.376 lat (msec) : 250=48.49%, 500=12.39% 00:19:29.376 cpu : usr=1.50%, sys=1.71%, ctx=2529, majf=0, minf=1 00:19:29.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:29.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.376 issued rwts: total=0,4535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.376 job6: (groupid=0, jobs=1): err= 0: pid=3067886: Sun May 12 06:59:35 2024 00:19:29.376 write: IOPS=441, BW=110MiB/s (116MB/s)(1119MiB/10136msec); 0 zone resets 00:19:29.376 slat (usec): min=23, max=95270, avg=1577.39, stdev=4923.80 00:19:29.376 clat (msec): min=2, max=456, avg=143.31, stdev=79.95 00:19:29.376 lat (msec): min=2, max=456, avg=144.89, stdev=81.05 00:19:29.376 clat percentiles (msec): 00:19:29.376 | 1.00th=[ 11], 5.00th=[ 31], 10.00th=[ 49], 20.00th=[ 71], 00:19:29.376 | 30.00th=[ 99], 40.00th=[ 118], 50.00th=[ 140], 60.00th=[ 161], 00:19:29.376 | 70.00th=[ 174], 80.00th=[ 197], 90.00th=[ 234], 95.00th=[ 313], 00:19:29.376 | 99.00th=[ 397], 99.50th=[ 414], 99.90th=[ 435], 99.95th=[ 439], 00:19:29.376 | 99.99th=[ 456] 00:19:29.376 bw ( KiB/s): min=51712, max=224768, per=9.00%, avg=112923.45, stdev=39157.48, samples=20 00:19:29.376 iops : min= 202, max= 878, avg=441.05, stdev=152.98, samples=20 00:19:29.376 lat (msec) : 4=0.09%, 10=0.74%, 20=1.54%, 50=8.58%, 100=19.89% 00:19:29.376 lat (msec) : 250=61.22%, 500=7.93% 00:19:29.376 cpu : usr=1.49%, sys=1.83%, ctx=2599, majf=0, minf=1 
00:19:29.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:29.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.376 issued rwts: total=0,4474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.376 job7: (groupid=0, jobs=1): err= 0: pid=3067887: Sun May 12 06:59:35 2024 00:19:29.376 write: IOPS=406, BW=102MiB/s (106MB/s)(1035MiB/10196msec); 0 zone resets 00:19:29.376 slat (usec): min=24, max=129629, avg=1594.64, stdev=4986.41 00:19:29.376 clat (msec): min=2, max=439, avg=155.94, stdev=85.51 00:19:29.376 lat (msec): min=3, max=444, avg=157.53, stdev=86.35 00:19:29.376 clat percentiles (msec): 00:19:29.376 | 1.00th=[ 11], 5.00th=[ 21], 10.00th=[ 39], 20.00th=[ 71], 00:19:29.376 | 30.00th=[ 107], 40.00th=[ 136], 50.00th=[ 165], 60.00th=[ 180], 00:19:29.376 | 70.00th=[ 201], 80.00th=[ 222], 90.00th=[ 266], 95.00th=[ 300], 00:19:29.376 | 99.00th=[ 380], 99.50th=[ 405], 99.90th=[ 430], 99.95th=[ 435], 00:19:29.376 | 99.99th=[ 439] 00:19:29.376 bw ( KiB/s): min=54784, max=150528, per=8.32%, avg=104345.60, stdev=25657.93, samples=20 00:19:29.376 iops : min= 214, max= 588, avg=407.60, stdev=100.23, samples=20 00:19:29.376 lat (msec) : 4=0.05%, 10=0.94%, 20=3.77%, 50=9.61%, 100=13.57% 00:19:29.376 lat (msec) : 250=58.77%, 500=13.29% 00:19:29.376 cpu : usr=1.31%, sys=1.43%, ctx=2537, majf=0, minf=1 00:19:29.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:29.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.376 issued rwts: total=0,4140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.376 job8: (groupid=0, jobs=1): err= 0: pid=3067888: Sun May 
12 06:59:35 2024 00:19:29.376 write: IOPS=405, BW=101MiB/s (106MB/s)(1021MiB/10072msec); 0 zone resets 00:19:29.376 slat (usec): min=22, max=113561, avg=2139.14, stdev=5743.16 00:19:29.376 clat (msec): min=2, max=410, avg=155.64, stdev=85.79 00:19:29.376 lat (msec): min=2, max=428, avg=157.77, stdev=86.90 00:19:29.376 clat percentiles (msec): 00:19:29.376 | 1.00th=[ 12], 5.00th=[ 47], 10.00th=[ 63], 20.00th=[ 80], 00:19:29.376 | 30.00th=[ 107], 40.00th=[ 117], 50.00th=[ 138], 60.00th=[ 161], 00:19:29.376 | 70.00th=[ 184], 80.00th=[ 228], 90.00th=[ 288], 95.00th=[ 321], 00:19:29.376 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 409], 99.95th=[ 409], 00:19:29.376 | 99.99th=[ 409] 00:19:29.376 bw ( KiB/s): min=47104, max=209408, per=8.20%, avg=102912.30, stdev=50739.18, samples=20 00:19:29.376 iops : min= 184, max= 818, avg=401.95, stdev=198.21, samples=20 00:19:29.376 lat (msec) : 4=0.07%, 10=0.61%, 20=1.37%, 50=3.58%, 100=20.74% 00:19:29.376 lat (msec) : 250=58.19%, 500=15.43% 00:19:29.376 cpu : usr=1.21%, sys=1.39%, ctx=1574, majf=0, minf=1 00:19:29.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:29.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.376 issued rwts: total=0,4083,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.376 job9: (groupid=0, jobs=1): err= 0: pid=3067889: Sun May 12 06:59:35 2024 00:19:29.376 write: IOPS=412, BW=103MiB/s (108MB/s)(1050MiB/10172msec); 0 zone resets 00:19:29.376 slat (usec): min=23, max=72183, avg=1601.40, stdev=4664.11 00:19:29.376 clat (usec): min=1843, max=473056, avg=153233.19, stdev=77242.36 00:19:29.376 lat (usec): min=1875, max=473094, avg=154834.59, stdev=78286.78 00:19:29.376 clat percentiles (msec): 00:19:29.376 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 52], 20.00th=[ 82], 00:19:29.376 | 30.00th=[ 107], 
40.00th=[ 138], 50.00th=[ 163], 60.00th=[ 180], 00:19:29.376 | 70.00th=[ 192], 80.00th=[ 207], 90.00th=[ 232], 95.00th=[ 271], 00:19:29.376 | 99.00th=[ 393], 99.50th=[ 439], 99.90th=[ 472], 99.95th=[ 472], 00:19:29.376 | 99.99th=[ 472] 00:19:29.376 bw ( KiB/s): min=34816, max=200704, per=8.44%, avg=105917.75, stdev=35418.63, samples=20 00:19:29.376 iops : min= 136, max= 784, avg=413.70, stdev=138.30, samples=20 00:19:29.376 lat (msec) : 2=0.12%, 4=0.33%, 10=0.76%, 20=1.52%, 50=7.00% 00:19:29.376 lat (msec) : 100=17.90%, 250=64.96%, 500=7.40% 00:19:29.376 cpu : usr=1.40%, sys=1.53%, ctx=2501, majf=0, minf=1 00:19:29.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:29.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.376 issued rwts: total=0,4201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.376 job10: (groupid=0, jobs=1): err= 0: pid=3067890: Sun May 12 06:59:35 2024 00:19:29.376 write: IOPS=462, BW=116MiB/s (121MB/s)(1179MiB/10191msec); 0 zone resets 00:19:29.376 slat (usec): min=20, max=167988, avg=1145.89, stdev=4510.28 00:19:29.376 clat (usec): min=1419, max=475133, avg=137068.80, stdev=82101.22 00:19:29.376 lat (usec): min=1457, max=492365, avg=138214.68, stdev=82911.43 00:19:29.376 clat percentiles (msec): 00:19:29.376 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 24], 20.00th=[ 54], 00:19:29.376 | 30.00th=[ 83], 40.00th=[ 115], 50.00th=[ 144], 60.00th=[ 165], 00:19:29.376 | 70.00th=[ 182], 80.00th=[ 201], 90.00th=[ 230], 95.00th=[ 279], 00:19:29.376 | 99.00th=[ 355], 99.50th=[ 376], 99.90th=[ 414], 99.95th=[ 477], 00:19:29.376 | 99.99th=[ 477] 00:19:29.376 bw ( KiB/s): min=66560, max=191488, per=9.50%, avg=119108.65, stdev=38029.58, samples=20 00:19:29.376 iops : min= 260, max= 748, avg=465.25, stdev=148.52, samples=20 00:19:29.376 lat (msec) : 
2=0.13%, 4=0.36%, 10=2.57%, 20=4.69%, 50=10.75% 00:19:29.376 lat (msec) : 100=16.54%, 250=57.57%, 500=7.40% 00:19:29.376 cpu : usr=1.52%, sys=1.92%, ctx=3380, majf=0, minf=1 00:19:29.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:29.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.377 issued rwts: total=0,4716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.377 00:19:29.377 Run status group 0 (all jobs): 00:19:29.377 WRITE: bw=1225MiB/s (1284MB/s), 101MiB/s-133MiB/s (106MB/s-140MB/s), io=12.2GiB (13.1GB), run=10072-10206msec 00:19:29.377 00:19:29.377 Disk stats (read/write): 00:19:29.377 nvme0n1: ios=54/9652, merge=0/0, ticks=3560/1198628, in_queue=1202188, util=99.75% 00:19:29.377 nvme10n1: ios=49/10813, merge=0/0, ticks=3153/1245596, in_queue=1248749, util=99.84% 00:19:29.377 nvme1n1: ios=45/8724, merge=0/0, ticks=282/1194855, in_queue=1195137, util=99.67% 00:19:29.377 nvme2n1: ios=0/9048, merge=0/0, ticks=0/1216507, in_queue=1216507, util=97.57% 00:19:29.377 nvme3n1: ios=45/8496, merge=0/0, ticks=1332/1190660, in_queue=1191992, util=99.82% 00:19:29.377 nvme4n1: ios=0/9038, merge=0/0, ticks=0/1241926, in_queue=1241926, util=98.06% 00:19:29.377 nvme5n1: ios=42/8757, merge=0/0, ticks=1044/1196171, in_queue=1197215, util=99.88% 00:19:29.377 nvme6n1: ios=36/8238, merge=0/0, ticks=44/1244192, in_queue=1244236, util=98.47% 00:19:29.377 nvme7n1: ios=38/7939, merge=0/0, ticks=2510/1200818, in_queue=1203328, util=99.92% 00:19:29.377 nvme8n1: ios=44/8388, merge=0/0, ticks=2090/1241446, in_queue=1243536, util=99.88% 00:19:29.377 nvme9n1: ios=0/9397, merge=0/0, ticks=0/1248845, in_queue=1248845, util=99.08% 00:19:29.377 06:59:35 -- target/multiconnection.sh@36 -- # sync 00:19:29.377 06:59:35 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:29.377 
06:59:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.377 06:59:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.377 06:59:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:29.377 06:59:35 -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.377 06:59:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:29.377 06:59:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:19:29.377 06:59:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:29.377 06:59:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:19:29.377 06:59:35 -- common/autotest_common.sh@1210 -- # return 0 00:19:29.377 06:59:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.377 06:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.377 06:59:35 -- common/autotest_common.sh@10 -- # set +x 00:19:29.377 06:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.377 06:59:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.377 06:59:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:29.377 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:29.377 06:59:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:29.377 06:59:36 -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.377 06:59:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:29.377 06:59:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:19:29.377 06:59:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:19:29.377 06:59:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:29.377 06:59:36 -- common/autotest_common.sh@1210 -- # return 0 00:19:29.377 06:59:36 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:29.377 06:59:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.377 06:59:36 -- common/autotest_common.sh@10 -- # set +x 00:19:29.377 06:59:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.377 06:59:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.377 06:59:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:29.377 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:29.377 06:59:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:29.377 06:59:36 -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.377 06:59:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:29.377 06:59:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:19:29.635 06:59:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:29.635 06:59:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:19:29.635 06:59:36 -- common/autotest_common.sh@1210 -- # return 0 00:19:29.635 06:59:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:29.635 06:59:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.635 06:59:36 -- common/autotest_common.sh@10 -- # set +x 00:19:29.635 06:59:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.635 06:59:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.635 06:59:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:29.635 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:29.635 06:59:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:29.635 06:59:36 -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.635 06:59:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:29.635 06:59:36 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:19:29.635 06:59:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:29.635 06:59:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:19:29.635 06:59:36 -- common/autotest_common.sh@1210 -- # return 0 00:19:29.635 06:59:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:29.635 06:59:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.635 06:59:36 -- common/autotest_common.sh@10 -- # set +x 00:19:29.893 06:59:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.893 06:59:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.893 06:59:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:29.893 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:29.893 06:59:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:29.893 06:59:36 -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.893 06:59:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:29.893 06:59:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:19:29.893 06:59:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:29.893 06:59:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:19:29.893 06:59:36 -- common/autotest_common.sh@1210 -- # return 0 00:19:29.893 06:59:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:29.893 06:59:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.893 06:59:36 -- common/autotest_common.sh@10 -- # set +x 00:19:29.893 06:59:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.893 06:59:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.893 06:59:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:30.153 
NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:30.153 06:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:30.153 06:59:37 -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.153 06:59:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:30.153 06:59:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:19:30.153 06:59:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:30.153 06:59:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:19:30.153 06:59:37 -- common/autotest_common.sh@1210 -- # return 0 00:19:30.153 06:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:30.153 06:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.153 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:30.153 06:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.153 06:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.153 06:59:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:30.412 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:30.412 06:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:30.412 06:59:37 -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.412 06:59:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:30.412 06:59:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:19:30.412 06:59:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:30.412 06:59:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:19:30.412 06:59:37 -- common/autotest_common.sh@1210 -- # return 0 00:19:30.412 06:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:30.412 06:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.412 06:59:37 -- 
common/autotest_common.sh@10 -- # set +x 00:19:30.412 06:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.412 06:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.412 06:59:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:30.412 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:30.412 06:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:30.412 06:59:37 -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.412 06:59:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:30.412 06:59:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:19:30.672 06:59:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:30.672 06:59:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:19:30.672 06:59:37 -- common/autotest_common.sh@1210 -- # return 0 00:19:30.672 06:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:30.672 06:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.672 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:30.672 06:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.672 06:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.672 06:59:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:30.672 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:30.672 06:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:30.672 06:59:37 -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.672 06:59:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:30.672 06:59:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:19:30.672 06:59:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:30.672 06:59:37 -- common/autotest_common.sh@1206 -- 
# grep -q -w SPDK9 00:19:30.672 06:59:37 -- common/autotest_common.sh@1210 -- # return 0 00:19:30.672 06:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:30.672 06:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.672 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:30.672 06:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.672 06:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.672 06:59:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:30.930 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:30.930 06:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:30.930 06:59:37 -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.930 06:59:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:30.930 06:59:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:19:30.930 06:59:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:30.930 06:59:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:19:30.930 06:59:37 -- common/autotest_common.sh@1210 -- # return 0 00:19:30.930 06:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:30.931 06:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.931 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:30.931 06:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.931 06:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.931 06:59:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:30.931 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:30.931 06:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:30.931 06:59:37 -- common/autotest_common.sh@1198 -- # 
local i=0 00:19:30.931 06:59:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:30.931 06:59:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:19:30.931 06:59:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:30.931 06:59:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:19:30.931 06:59:37 -- common/autotest_common.sh@1210 -- # return 0 00:19:30.931 06:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:30.931 06:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:30.931 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:30.931 06:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:30.931 06:59:37 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:30.931 06:59:37 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:30.931 06:59:37 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:30.931 06:59:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:30.931 06:59:37 -- nvmf/common.sh@116 -- # sync 00:19:30.931 06:59:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:30.931 06:59:37 -- nvmf/common.sh@119 -- # set +e 00:19:30.931 06:59:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:30.931 06:59:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:30.931 rmmod nvme_tcp 00:19:30.931 rmmod nvme_fabrics 00:19:30.931 rmmod nvme_keyring 00:19:30.931 06:59:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:30.931 06:59:38 -- nvmf/common.sh@123 -- # set -e 00:19:30.931 06:59:38 -- nvmf/common.sh@124 -- # return 0 00:19:30.931 06:59:38 -- nvmf/common.sh@477 -- # '[' -n 3062428 ']' 00:19:30.931 06:59:38 -- nvmf/common.sh@478 -- # killprocess 3062428 00:19:30.931 06:59:38 -- common/autotest_common.sh@926 -- # '[' -z 3062428 ']' 00:19:30.931 06:59:38 -- common/autotest_common.sh@930 -- # kill -0 3062428 00:19:30.931 06:59:38 -- common/autotest_common.sh@931 -- # 
uname 00:19:30.931 06:59:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:30.931 06:59:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3062428 00:19:30.931 06:59:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:30.931 06:59:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:30.931 06:59:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3062428' 00:19:30.931 killing process with pid 3062428 00:19:30.931 06:59:38 -- common/autotest_common.sh@945 -- # kill 3062428 00:19:30.931 06:59:38 -- common/autotest_common.sh@950 -- # wait 3062428 00:19:31.497 06:59:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:31.497 06:59:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:31.497 06:59:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:31.497 06:59:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.497 06:59:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:31.497 06:59:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.497 06:59:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.497 06:59:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.033 06:59:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:34.033 00:19:34.033 real 1m0.569s 00:19:34.033 user 3m12.012s 00:19:34.033 sys 0m26.394s 00:19:34.033 06:59:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.033 06:59:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.033 ************************************ 00:19:34.033 END TEST nvmf_multiconnection 00:19:34.033 ************************************ 00:19:34.033 06:59:40 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:34.033 06:59:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:34.033 06:59:40 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:19:34.033 06:59:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.033 ************************************ 00:19:34.033 START TEST nvmf_initiator_timeout 00:19:34.033 ************************************ 00:19:34.033 06:59:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:34.033 * Looking for test storage... 00:19:34.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.033 06:59:40 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.033 06:59:40 -- nvmf/common.sh@7 -- # uname -s 00:19:34.033 06:59:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.033 06:59:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.033 06:59:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.033 06:59:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.033 06:59:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.033 06:59:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.033 06:59:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.033 06:59:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.033 06:59:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.033 06:59:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.033 06:59:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.033 06:59:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.033 06:59:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.033 06:59:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.033 06:59:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.033 06:59:40 -- nvmf/common.sh@44 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.033 06:59:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.033 06:59:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.033 06:59:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.033 06:59:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.034 06:59:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.034 06:59:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.034 06:59:40 -- paths/export.sh@5 -- # export PATH 00:19:34.034 06:59:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.034 06:59:40 -- nvmf/common.sh@46 -- # : 0 00:19:34.034 06:59:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:34.034 06:59:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:34.034 06:59:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:34.034 06:59:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.034 06:59:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.034 06:59:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:34.034 06:59:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:34.034 06:59:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:34.034 06:59:40 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.034 06:59:40 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.034 06:59:40 -- 
target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:34.034 06:59:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:34.034 06:59:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.034 06:59:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:34.034 06:59:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:34.034 06:59:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:34.034 06:59:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.034 06:59:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.034 06:59:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.034 06:59:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:34.034 06:59:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:34.034 06:59:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:34.034 06:59:40 -- common/autotest_common.sh@10 -- # set +x 00:19:35.934 06:59:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:35.934 06:59:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:35.934 06:59:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:35.935 06:59:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:35.935 06:59:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:35.935 06:59:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:35.935 06:59:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:35.935 06:59:42 -- nvmf/common.sh@294 -- # net_devs=() 00:19:35.935 06:59:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:35.935 06:59:42 -- nvmf/common.sh@295 -- # e810=() 00:19:35.935 06:59:42 -- nvmf/common.sh@295 -- # local -ga e810 00:19:35.935 06:59:42 -- nvmf/common.sh@296 -- # x722=() 00:19:35.935 06:59:42 -- nvmf/common.sh@296 -- # local -ga x722 00:19:35.935 06:59:42 -- nvmf/common.sh@297 -- # mlx=() 00:19:35.935 06:59:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:35.935 06:59:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:19:35.935 06:59:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.935 06:59:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:35.935 06:59:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:35.935 06:59:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:35.935 06:59:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:35.935 06:59:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:35.935 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:35.935 06:59:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:19:35.935 06:59:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:35.935 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:35.935 06:59:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:35.935 06:59:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:35.935 06:59:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.935 06:59:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:35.935 06:59:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.935 06:59:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:35.935 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:35.935 06:59:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.935 06:59:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:35.935 06:59:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.935 06:59:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:35.935 06:59:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.935 06:59:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:35.935 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:35.935 06:59:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.935 06:59:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:35.935 06:59:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:35.935 06:59:42 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:35.935 06:59:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.935 06:59:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.935 06:59:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.935 06:59:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:35.935 06:59:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.935 06:59:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.935 06:59:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:35.935 06:59:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.935 06:59:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.935 06:59:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:35.935 06:59:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:35.935 06:59:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.935 06:59:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.935 06:59:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.935 06:59:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.935 06:59:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:35.935 06:59:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.935 06:59:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.935 06:59:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.935 06:59:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:35.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:35.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:19:35.935 00:19:35.935 --- 10.0.0.2 ping statistics --- 00:19:35.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.935 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:19:35.935 06:59:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:19:35.935 00:19:35.935 --- 10.0.0.1 ping statistics --- 00:19:35.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.935 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:19:35.935 06:59:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.935 06:59:42 -- nvmf/common.sh@410 -- # return 0 00:19:35.935 06:59:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:35.935 06:59:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.935 06:59:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:35.935 06:59:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.935 06:59:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:35.935 06:59:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:35.935 06:59:42 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:35.935 06:59:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:35.935 06:59:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:35.935 06:59:42 -- common/autotest_common.sh@10 -- # set +x 00:19:35.935 06:59:42 -- nvmf/common.sh@469 -- # nvmfpid=3071254 00:19:35.935 06:59:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:35.935 06:59:42 -- nvmf/common.sh@470 -- # waitforlisten 3071254 00:19:35.935 06:59:42 -- 
common/autotest_common.sh@819 -- # '[' -z 3071254 ']' 00:19:35.935 06:59:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.935 06:59:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:35.935 06:59:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.935 06:59:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:35.935 06:59:42 -- common/autotest_common.sh@10 -- # set +x 00:19:35.935 [2024-05-12 06:59:42.885132] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:35.935 [2024-05-12 06:59:42.885202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.935 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.935 [2024-05-12 06:59:42.950942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.935 [2024-05-12 06:59:43.057421] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:35.935 [2024-05-12 06:59:43.057553] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.935 [2024-05-12 06:59:43.057569] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.935 [2024-05-12 06:59:43.057581] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
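[Editor's aside] The `nvmf_tcp_init` sequence a few lines up (netns creation, IP assignment, link up, iptables accept rule) is what lets target (`cvl_0_0`, 10.0.0.2, inside the namespace) and initiator (`cvl_0_1`, 10.0.0.1) traffic actually traverse the link. A sketch of that sequence, assuming the interface pair from this log; the function only prints the commands, since running them needs root and the real NICs:

```shell
#!/usr/bin/env bash
# Print (not execute) the netns setup steps seen in nvmf/common.sh@243-263.
# Interface names and the 10.0.0.0/24 addressing are taken from this log.
nvmf_tcp_init_sketch() {
    local tgt_if=$1 ini_if=$2 ns=${1}_ns_spdk
    printf '%s\n' \
        "ip netns add $ns" \
        "ip link set $tgt_if netns $ns" \
        "ip addr add 10.0.0.1/24 dev $ini_if" \
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if" \
        "ip link set $ini_if up" \
        "ip netns exec $ns ip link set $tgt_if up" \
        "ip netns exec $ns ip link set lo up" \
        "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
}
```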
00:19:35.935 [2024-05-12 06:59:43.057634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.935 [2024-05-12 06:59:43.057692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.935 [2024-05-12 06:59:43.057759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.935 [2024-05-12 06:59:43.057764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.870 06:59:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:36.870 06:59:43 -- common/autotest_common.sh@852 -- # return 0 00:19:36.870 06:59:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:36.870 06:59:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:36.870 06:59:43 -- common/autotest_common.sh@10 -- # set +x 00:19:36.870 06:59:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.870 06:59:43 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:36.870 06:59:43 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:36.870 06:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.870 06:59:43 -- common/autotest_common.sh@10 -- # set +x 00:19:36.870 Malloc0 00:19:36.870 06:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.870 06:59:43 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:36.870 06:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.870 06:59:43 -- common/autotest_common.sh@10 -- # set +x 00:19:36.870 Delay0 00:19:36.870 06:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.870 06:59:43 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.870 06:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.870 06:59:43 -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.870 [2024-05-12 06:59:43.911181] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.870 06:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.870 06:59:43 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:36.870 06:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.870 06:59:43 -- common/autotest_common.sh@10 -- # set +x 00:19:36.870 06:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.870 06:59:43 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:36.870 06:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.870 06:59:43 -- common/autotest_common.sh@10 -- # set +x 00:19:36.870 06:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.870 06:59:43 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.870 06:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.870 06:59:43 -- common/autotest_common.sh@10 -- # set +x 00:19:36.870 [2024-05-12 06:59:43.939425] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.870 06:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.870 06:59:43 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:37.805 06:59:44 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:37.805 06:59:44 -- common/autotest_common.sh@1177 -- # local i=0 00:19:37.805 06:59:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:37.805 06:59:44 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:37.805 06:59:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:39.710 06:59:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:39.710 06:59:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:39.710 06:59:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:39.710 06:59:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:39.710 06:59:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:39.710 06:59:46 -- common/autotest_common.sh@1187 -- # return 0 00:19:39.710 06:59:46 -- target/initiator_timeout.sh@35 -- # fio_pid=3071704 00:19:39.710 06:59:46 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:39.710 06:59:46 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:39.710 [global] 00:19:39.710 thread=1 00:19:39.710 invalidate=1 00:19:39.710 rw=write 00:19:39.710 time_based=1 00:19:39.710 runtime=60 00:19:39.710 ioengine=libaio 00:19:39.710 direct=1 00:19:39.710 bs=4096 00:19:39.710 iodepth=1 00:19:39.710 norandommap=0 00:19:39.710 numjobs=1 00:19:39.710 00:19:39.710 verify_dump=1 00:19:39.710 verify_backlog=512 00:19:39.710 verify_state_save=0 00:19:39.710 do_verify=1 00:19:39.710 verify=crc32c-intel 00:19:39.710 [job0] 00:19:39.710 filename=/dev/nvme0n1 00:19:39.710 Could not set queue depth (nvme0n1) 00:19:39.710 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:39.710 fio-3.35 00:19:39.710 Starting 1 thread 00:19:42.998 06:59:49 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:42.998 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:42.998 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:42.998 true 00:19:42.998 06:59:49 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:19:42.998 06:59:49 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:42.998 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:42.998 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:42.998 true 00:19:42.998 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:42.998 06:59:49 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:42.998 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:42.998 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:42.998 true 00:19:42.998 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:42.998 06:59:49 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:42.998 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:42.998 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:42.998 true 00:19:42.999 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:42.999 06:59:49 -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:45.547 06:59:52 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:45.547 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.547 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:19:45.547 true 00:19:45.547 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.547 06:59:52 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:45.547 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.547 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:19:45.547 true 00:19:45.547 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.547 06:59:52 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:45.547 06:59:52 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:19:45.547 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:19:45.547 true 00:19:45.547 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.547 06:59:52 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:45.547 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.547 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:19:45.808 true 00:19:45.808 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.808 06:59:52 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:45.808 06:59:52 -- target/initiator_timeout.sh@54 -- # wait 3071704 00:20:42.041 00:20:42.041 job0: (groupid=0, jobs=1): err= 0: pid=3071774: Sun May 12 07:00:46 2024 00:20:42.041 read: IOPS=103, BW=412KiB/s (422kB/s)(24.2MiB/60001msec) 00:20:42.041 slat (nsec): min=5467, max=71010, avg=16693.36, stdev=8796.13 00:20:42.041 clat (usec): min=361, max=41044k, avg=9302.89, stdev=521855.53 00:20:42.041 lat (usec): min=369, max=41044k, avg=9319.58, stdev=521855.45 00:20:42.041 clat percentiles (usec): 00:20:42.041 | 1.00th=[ 388], 5.00th=[ 420], 10.00th=[ 437], 00:20:42.041 | 20.00th=[ 457], 30.00th=[ 474], 40.00th=[ 486], 00:20:42.041 | 50.00th=[ 498], 60.00th=[ 510], 70.00th=[ 523], 00:20:42.041 | 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 41157], 00:20:42.041 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:20:42.041 | 99.95th=[ 42206], 99.99th=[17112761] 00:20:42.041 write: IOPS=110, BW=444KiB/s (454kB/s)(26.0MiB/60001msec); 0 zone resets 00:20:42.041 slat (usec): min=6, max=11665, avg=23.23, stdev=171.11 00:20:42.041 clat (usec): min=229, max=513, avg=318.38, stdev=56.46 00:20:42.041 lat (usec): min=236, max=12010, avg=341.61, stdev=183.55 00:20:42.041 clat percentiles (usec): 00:20:42.041 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 273], 00:20:42.041 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 318], 00:20:42.041 | 70.00th=[ 334], 
80.00th=[ 363], 90.00th=[ 404], 95.00th=[ 429], 00:20:42.041 | 99.00th=[ 482], 99.50th=[ 490], 99.90th=[ 510], 99.95th=[ 510], 00:20:42.041 | 99.99th=[ 515] 00:20:42.041 bw ( KiB/s): min= 872, max= 6896, per=100.00%, avg=4096.00, stdev=1715.34, samples=12 00:20:42.041 iops : min= 218, max= 1724, avg=1024.00, stdev=428.84, samples=12 00:20:42.041 lat (usec) : 250=3.30%, 500=74.38%, 750=19.72%, 1000=0.05% 00:20:42.041 lat (msec) : 2=0.02%, 50=2.53%, >=2000=0.01% 00:20:42.041 cpu : usr=0.29%, sys=0.55%, ctx=12845, majf=0, minf=2 00:20:42.041 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.041 issued rwts: total=6187,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.041 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:42.041 00:20:42.041 Run status group 0 (all jobs): 00:20:42.041 READ: bw=412KiB/s (422kB/s), 412KiB/s-412KiB/s (422kB/s-422kB/s), io=24.2MiB (25.3MB), run=60001-60001msec 00:20:42.041 WRITE: bw=444KiB/s (454kB/s), 444KiB/s-444KiB/s (454kB/s-454kB/s), io=26.0MiB (27.3MB), run=60001-60001msec 00:20:42.041 00:20:42.041 Disk stats (read/write): 00:20:42.041 nvme0n1: ios=6243/6460, merge=0/0, ticks=16814/1902, in_queue=18716, util=99.82% 00:20:42.041 07:00:46 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:42.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:42.041 07:00:46 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:42.041 07:00:46 -- common/autotest_common.sh@1198 -- # local i=0 00:20:42.041 07:00:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:42.041 07:00:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:42.041 07:00:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 
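[Editor's aside] The job parameters dumped at the start of the fio run above (via the `fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v` invocation) correspond to a job file roughly like the following; `/dev/nvme0n1` is the device node the connected SPDK namespace happened to get in this run:

```ini
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
```

With the Delay0 bdev's p99 write latency bumped to 310 ms mid-run (the `bdev_delay_update_latency` calls above), the sustained ~110 write IOPS at iodepth=1 is the expected, throttled result.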
00:20:42.041 07:00:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:42.041 07:00:47 -- common/autotest_common.sh@1210 -- # return 0 00:20:42.041 07:00:47 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:42.041 07:00:47 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:42.041 nvmf hotplug test: fio successful as expected 00:20:42.041 07:00:47 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.041 07:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:42.041 07:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.041 07:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:42.041 07:00:47 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:42.041 07:00:47 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:42.041 07:00:47 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:42.041 07:00:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:42.041 07:00:47 -- nvmf/common.sh@116 -- # sync 00:20:42.041 07:00:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:42.041 07:00:47 -- nvmf/common.sh@119 -- # set +e 00:20:42.041 07:00:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:42.041 07:00:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:42.041 rmmod nvme_tcp 00:20:42.041 rmmod nvme_fabrics 00:20:42.041 rmmod nvme_keyring 00:20:42.041 07:00:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:42.041 07:00:47 -- nvmf/common.sh@123 -- # set -e 00:20:42.041 07:00:47 -- nvmf/common.sh@124 -- # return 0 00:20:42.041 07:00:47 -- nvmf/common.sh@477 -- # '[' -n 3071254 ']' 00:20:42.041 07:00:47 -- nvmf/common.sh@478 -- # killprocess 3071254 00:20:42.041 07:00:47 -- common/autotest_common.sh@926 -- # '[' -z 3071254 ']' 00:20:42.041 07:00:47 -- common/autotest_common.sh@930 -- # kill -0 3071254 00:20:42.041 07:00:47 -- 
common/autotest_common.sh@931 -- # uname 00:20:42.041 07:00:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:42.041 07:00:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3071254 00:20:42.041 07:00:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:42.041 07:00:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:42.041 07:00:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3071254' 00:20:42.041 killing process with pid 3071254 00:20:42.041 07:00:47 -- common/autotest_common.sh@945 -- # kill 3071254 00:20:42.041 07:00:47 -- common/autotest_common.sh@950 -- # wait 3071254 00:20:42.041 07:00:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:42.041 07:00:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:42.041 07:00:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:42.041 07:00:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.041 07:00:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:42.041 07:00:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.041 07:00:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.041 07:00:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.300 07:00:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:42.300 00:20:42.300 real 1m8.747s 00:20:42.300 user 4m10.211s 00:20:42.300 sys 0m7.779s 00:20:42.300 07:00:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.300 07:00:49 -- common/autotest_common.sh@10 -- # set +x 00:20:42.300 ************************************ 00:20:42.300 END TEST nvmf_initiator_timeout 00:20:42.300 ************************************ 00:20:42.558 07:00:49 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:20:42.558 07:00:49 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:20:42.558 07:00:49 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:20:42.558 07:00:49 -- nvmf/common.sh@284 
-- # xtrace_disable 00:20:42.558 07:00:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.455 07:00:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:44.455 07:00:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:44.455 07:00:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:44.455 07:00:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:44.455 07:00:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:44.455 07:00:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:44.455 07:00:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:44.455 07:00:51 -- nvmf/common.sh@294 -- # net_devs=() 00:20:44.455 07:00:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:44.455 07:00:51 -- nvmf/common.sh@295 -- # e810=() 00:20:44.455 07:00:51 -- nvmf/common.sh@295 -- # local -ga e810 00:20:44.455 07:00:51 -- nvmf/common.sh@296 -- # x722=() 00:20:44.455 07:00:51 -- nvmf/common.sh@296 -- # local -ga x722 00:20:44.455 07:00:51 -- nvmf/common.sh@297 -- # mlx=() 00:20:44.455 07:00:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:44.455 07:00:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.455 
07:00:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.455 07:00:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:44.455 07:00:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:44.455 07:00:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:44.455 07:00:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:44.455 07:00:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:44.455 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:44.455 07:00:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:44.455 07:00:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:44.455 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:44.455 07:00:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:44.455 07:00:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:44.455 07:00:51 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.455 07:00:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:44.455 07:00:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.455 07:00:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:44.455 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:44.455 07:00:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.455 07:00:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:44.455 07:00:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.455 07:00:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:44.455 07:00:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.455 07:00:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:44.455 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:44.455 07:00:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.455 07:00:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:44.455 07:00:51 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.455 07:00:51 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:20:44.455 07:00:51 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:44.455 07:00:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:44.455 07:00:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:44.455 07:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:44.455 ************************************ 00:20:44.455 START TEST nvmf_perf_adq 00:20:44.455 ************************************ 00:20:44.455 07:00:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:44.455 * Looking for test storage... 
00:20:44.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.455 07:00:51 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.455 07:00:51 -- nvmf/common.sh@7 -- # uname -s 00:20:44.455 07:00:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.455 07:00:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.455 07:00:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.455 07:00:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.455 07:00:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.455 07:00:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.455 07:00:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.455 07:00:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.455 07:00:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.455 07:00:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.455 07:00:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.455 07:00:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.455 07:00:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.455 07:00:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.455 07:00:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.455 07:00:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.455 07:00:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.455 07:00:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.455 07:00:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.455 07:00:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.455 07:00:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.455 07:00:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.455 07:00:51 -- paths/export.sh@5 -- # export PATH 00:20:44.455 07:00:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.455 07:00:51 -- nvmf/common.sh@46 -- # : 0 00:20:44.455 07:00:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:44.455 07:00:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:44.455 07:00:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:44.455 07:00:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.455 07:00:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.455 07:00:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:44.455 07:00:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:44.455 07:00:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:44.455 07:00:51 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:44.455 07:00:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:44.455 07:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:46.351 07:00:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:46.351 07:00:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:46.351 07:00:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:46.351 07:00:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:46.351 07:00:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:46.351 07:00:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:46.351 07:00:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:46.351 07:00:53 -- nvmf/common.sh@294 -- # net_devs=() 00:20:46.351 07:00:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:46.351 07:00:53 
-- nvmf/common.sh@295 -- # e810=() 00:20:46.351 07:00:53 -- nvmf/common.sh@295 -- # local -ga e810 00:20:46.351 07:00:53 -- nvmf/common.sh@296 -- # x722=() 00:20:46.351 07:00:53 -- nvmf/common.sh@296 -- # local -ga x722 00:20:46.351 07:00:53 -- nvmf/common.sh@297 -- # mlx=() 00:20:46.351 07:00:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:46.351 07:00:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.351 07:00:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:46.351 07:00:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:46.351 07:00:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:46.351 07:00:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:46.351 07:00:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:46.351 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:20:46.351 07:00:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:46.351 07:00:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:46.351 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:46.351 07:00:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:46.351 07:00:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:46.351 07:00:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:46.351 07:00:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.351 07:00:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:46.351 07:00:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.351 07:00:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:46.351 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:46.351 07:00:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.351 07:00:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:46.351 07:00:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.351 07:00:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:46.351 07:00:53 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.351 07:00:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:46.351 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:46.351 07:00:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.351 07:00:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:46.351 07:00:53 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.351 07:00:53 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:46.351 07:00:53 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:46.351 07:00:53 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:20:46.351 07:00:53 -- target/perf_adq.sh@52 -- # rmmod ice 00:20:47.293 07:00:54 -- target/perf_adq.sh@53 -- # modprobe ice 00:20:48.672 07:00:55 -- target/perf_adq.sh@54 -- # sleep 5 00:20:53.954 07:01:00 -- target/perf_adq.sh@67 -- # nvmftestinit 00:20:53.954 07:01:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:53.954 07:01:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.954 07:01:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:53.954 07:01:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:53.954 07:01:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:53.954 07:01:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.954 07:01:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.954 07:01:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.954 07:01:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:53.954 07:01:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:53.954 07:01:00 -- common/autotest_common.sh@10 -- # set +x 00:20:53.954 07:01:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:53.954 07:01:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:53.954 
07:01:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:53.954 07:01:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:53.954 07:01:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:53.954 07:01:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:53.954 07:01:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:53.954 07:01:00 -- nvmf/common.sh@294 -- # net_devs=() 00:20:53.954 07:01:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:53.954 07:01:00 -- nvmf/common.sh@295 -- # e810=() 00:20:53.954 07:01:00 -- nvmf/common.sh@295 -- # local -ga e810 00:20:53.954 07:01:00 -- nvmf/common.sh@296 -- # x722=() 00:20:53.954 07:01:00 -- nvmf/common.sh@296 -- # local -ga x722 00:20:53.954 07:01:00 -- nvmf/common.sh@297 -- # mlx=() 00:20:53.954 07:01:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:53.954 07:01:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.954 07:01:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:53.954 07:01:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:53.954 07:01:00 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:53.954 07:01:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:53.954 07:01:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:53.954 07:01:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:53.954 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:53.954 07:01:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:53.954 07:01:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:53.954 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:53.954 07:01:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:53.954 07:01:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:53.954 07:01:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.954 07:01:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:53.954 07:01:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.954 07:01:00 -- nvmf/common.sh@388 -- # echo 'Found net 
devices under 0000:0a:00.0: cvl_0_0' 00:20:53.954 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:53.954 07:01:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.954 07:01:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:53.954 07:01:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.954 07:01:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:53.954 07:01:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.954 07:01:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:53.954 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:53.954 07:01:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.954 07:01:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:53.954 07:01:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:53.954 07:01:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:53.954 07:01:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:53.954 07:01:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.954 07:01:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.954 07:01:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.954 07:01:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:53.955 07:01:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.955 07:01:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.955 07:01:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:53.955 07:01:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.955 07:01:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.955 07:01:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:53.955 07:01:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:53.955 07:01:00 -- nvmf/common.sh@247 -- # ip 
netns add cvl_0_0_ns_spdk 00:20:53.955 07:01:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.955 07:01:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.955 07:01:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.955 07:01:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:53.955 07:01:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.955 07:01:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.955 07:01:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.955 07:01:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:53.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:20:53.955 00:20:53.955 --- 10.0.0.2 ping statistics --- 00:20:53.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.955 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:20:53.955 07:01:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:20:53.955 00:20:53.955 --- 10.0.0.1 ping statistics --- 00:20:53.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.955 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:20:53.955 07:01:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.955 07:01:00 -- nvmf/common.sh@410 -- # return 0 00:20:53.955 07:01:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:53.955 07:01:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.955 07:01:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:53.955 07:01:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:53.955 07:01:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.955 07:01:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:53.955 07:01:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:53.955 07:01:00 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:53.955 07:01:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:53.955 07:01:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:53.955 07:01:00 -- common/autotest_common.sh@10 -- # set +x 00:20:53.955 07:01:00 -- nvmf/common.sh@469 -- # nvmfpid=3084192 00:20:53.955 07:01:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:53.955 07:01:00 -- nvmf/common.sh@470 -- # waitforlisten 3084192 00:20:53.955 07:01:00 -- common/autotest_common.sh@819 -- # '[' -z 3084192 ']' 00:20:53.955 07:01:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.955 07:01:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:53.955 07:01:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:53.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.955 07:01:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:53.955 07:01:00 -- common/autotest_common.sh@10 -- # set +x 00:20:53.955 [2024-05-12 07:01:00.866087] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:53.955 [2024-05-12 07:01:00.866158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.955 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.955 [2024-05-12 07:01:00.935585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.955 [2024-05-12 07:01:01.054443] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:53.955 [2024-05-12 07:01:01.054605] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.955 [2024-05-12 07:01:01.054622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.955 [2024-05-12 07:01:01.054634] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:53.955 [2024-05-12 07:01:01.057723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.955 [2024-05-12 07:01:01.057762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.955 [2024-05-12 07:01:01.057840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.955 [2024-05-12 07:01:01.057843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.213 07:01:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:54.213 07:01:01 -- common/autotest_common.sh@852 -- # return 0 00:20:54.213 07:01:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:54.213 07:01:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:54.213 07:01:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 07:01:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.213 07:01:01 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:20:54.213 07:01:01 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:54.213 07:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:54.213 07:01:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 07:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:54.213 07:01:01 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:20:54.213 07:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:54.213 07:01:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 07:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:54.213 07:01:01 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:54.213 07:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:54.213 07:01:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 [2024-05-12 07:01:01.277463] tcp.c: 659:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:20:54.213 07:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:54.213 07:01:01 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:54.213 07:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:54.213 07:01:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 Malloc1 00:20:54.213 07:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:54.213 07:01:01 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:54.213 07:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:54.213 07:01:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 07:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:54.213 07:01:01 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:54.213 07:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:54.213 07:01:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 07:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:54.213 07:01:01 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.213 07:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:54.213 07:01:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 [2024-05-12 07:01:01.329532] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.213 07:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:54.213 07:01:01 -- target/perf_adq.sh@73 -- # perfpid=3084277 00:20:54.213 07:01:01 -- target/perf_adq.sh@74 -- # sleep 2 00:20:54.213 07:01:01 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
00:20:54.472 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.375 07:01:03 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:20:56.375 07:01:03 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:56.375 07:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:56.375 07:01:03 -- target/perf_adq.sh@76 -- # wc -l 00:20:56.375 07:01:03 -- common/autotest_common.sh@10 -- # set +x 00:20:56.375 07:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:56.375 07:01:03 -- target/perf_adq.sh@76 -- # count=4 00:20:56.375 07:01:03 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:20:56.375 07:01:03 -- target/perf_adq.sh@81 -- # wait 3084277 00:21:04.489 Initializing NVMe Controllers 00:21:04.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:04.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:04.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:04.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:04.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:04.490 Initialization complete. Launching workers. 
00:21:04.490 ======================================================== 00:21:04.490 Latency(us) 00:21:04.490 Device Information : IOPS MiB/s Average min max 00:21:04.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10789.50 42.15 5932.25 1044.14 8697.48 00:21:04.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11384.80 44.47 5622.19 1223.33 9368.84 00:21:04.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11417.70 44.60 5606.73 1039.01 9406.59 00:21:04.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8804.60 34.39 7270.46 1436.79 12593.17 00:21:04.490 ======================================================== 00:21:04.490 Total : 42396.60 165.61 6039.23 1039.01 12593.17 00:21:04.490 00:21:04.490 07:01:11 -- target/perf_adq.sh@82 -- # nvmftestfini 00:21:04.490 07:01:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:04.490 07:01:11 -- nvmf/common.sh@116 -- # sync 00:21:04.490 07:01:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:04.490 07:01:11 -- nvmf/common.sh@119 -- # set +e 00:21:04.490 07:01:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:04.490 07:01:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:04.490 rmmod nvme_tcp 00:21:04.490 rmmod nvme_fabrics 00:21:04.490 rmmod nvme_keyring 00:21:04.490 07:01:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:04.490 07:01:11 -- nvmf/common.sh@123 -- # set -e 00:21:04.490 07:01:11 -- nvmf/common.sh@124 -- # return 0 00:21:04.490 07:01:11 -- nvmf/common.sh@477 -- # '[' -n 3084192 ']' 00:21:04.490 07:01:11 -- nvmf/common.sh@478 -- # killprocess 3084192 00:21:04.490 07:01:11 -- common/autotest_common.sh@926 -- # '[' -z 3084192 ']' 00:21:04.490 07:01:11 -- common/autotest_common.sh@930 -- # kill -0 3084192 00:21:04.490 07:01:11 -- common/autotest_common.sh@931 -- # uname 00:21:04.490 07:01:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:04.490 07:01:11 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3084192 00:21:04.490 07:01:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:04.490 07:01:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:04.490 07:01:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3084192' 00:21:04.490 killing process with pid 3084192 00:21:04.490 07:01:11 -- common/autotest_common.sh@945 -- # kill 3084192 00:21:04.490 07:01:11 -- common/autotest_common.sh@950 -- # wait 3084192 00:21:05.057 07:01:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:05.057 07:01:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:05.057 07:01:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:05.057 07:01:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.057 07:01:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:05.057 07:01:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.057 07:01:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.057 07:01:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.957 07:01:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:06.957 07:01:13 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:21:06.957 07:01:13 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:07.532 07:01:14 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:08.916 07:01:15 -- target/perf_adq.sh@54 -- # sleep 5 00:21:14.194 07:01:20 -- target/perf_adq.sh@87 -- # nvmftestinit 00:21:14.194 07:01:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:14.194 07:01:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.194 07:01:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:14.194 07:01:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:14.194 07:01:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:14.194 07:01:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.194 
07:01:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.194 07:01:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.194 07:01:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:14.194 07:01:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:14.194 07:01:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:14.194 07:01:20 -- common/autotest_common.sh@10 -- # set +x 00:21:14.194 07:01:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:14.194 07:01:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:14.194 07:01:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:14.194 07:01:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:14.194 07:01:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:14.194 07:01:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:14.194 07:01:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:14.194 07:01:20 -- nvmf/common.sh@294 -- # net_devs=() 00:21:14.194 07:01:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:14.194 07:01:20 -- nvmf/common.sh@295 -- # e810=() 00:21:14.194 07:01:20 -- nvmf/common.sh@295 -- # local -ga e810 00:21:14.194 07:01:20 -- nvmf/common.sh@296 -- # x722=() 00:21:14.194 07:01:20 -- nvmf/common.sh@296 -- # local -ga x722 00:21:14.194 07:01:20 -- nvmf/common.sh@297 -- # mlx=() 00:21:14.194 07:01:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:14.194 07:01:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.194 07:01:20 -- 
nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.194 07:01:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:14.194 07:01:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:14.194 07:01:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:14.194 07:01:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:14.194 07:01:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:14.194 07:01:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:14.194 07:01:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:14.195 07:01:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:14.195 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:14.195 07:01:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:14.195 07:01:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:14.195 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:14.195 07:01:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@351 -- # 
[[ tcp == rdma ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:14.195 07:01:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:14.195 07:01:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.195 07:01:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:14.195 07:01:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.195 07:01:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:14.195 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:14.195 07:01:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.195 07:01:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:14.195 07:01:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.195 07:01:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:14.195 07:01:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.195 07:01:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:14.195 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:14.195 07:01:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.195 07:01:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:14.195 07:01:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:14.195 07:01:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:14.195 07:01:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:14.195 07:01:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.195 07:01:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.195 07:01:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.195 07:01:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:14.195 07:01:20 -- 
nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.195 07:01:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.195 07:01:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:14.195 07:01:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.195 07:01:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.195 07:01:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:14.195 07:01:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:14.195 07:01:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.195 07:01:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.195 07:01:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.195 07:01:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.195 07:01:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:14.195 07:01:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.195 07:01:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.195 07:01:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.195 07:01:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:14.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:21:14.195 00:21:14.195 --- 10.0.0.2 ping statistics --- 00:21:14.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.195 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:21:14.195 07:01:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:14.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:21:14.195 00:21:14.195 --- 10.0.0.1 ping statistics --- 00:21:14.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.195 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:21:14.195 07:01:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.195 07:01:21 -- nvmf/common.sh@410 -- # return 0 00:21:14.195 07:01:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:14.195 07:01:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.195 07:01:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:14.195 07:01:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:14.195 07:01:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.195 07:01:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:14.195 07:01:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:14.195 07:01:21 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:21:14.195 07:01:21 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:14.195 07:01:21 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:14.195 07:01:21 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:14.195 net.core.busy_poll = 1 00:21:14.195 07:01:21 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:14.195 net.core.busy_read = 1 00:21:14.195 07:01:21 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:14.195 07:01:21 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:14.195 07:01:21 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:14.195 07:01:21 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:14.195 07:01:21 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:14.195 07:01:21 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:14.195 07:01:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:14.195 07:01:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:14.195 07:01:21 -- common/autotest_common.sh@10 -- # set +x 00:21:14.195 07:01:21 -- nvmf/common.sh@469 -- # nvmfpid=3086905 00:21:14.195 07:01:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:14.195 07:01:21 -- nvmf/common.sh@470 -- # waitforlisten 3086905 00:21:14.195 07:01:21 -- common/autotest_common.sh@819 -- # '[' -z 3086905 ']' 00:21:14.195 07:01:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.195 07:01:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:14.195 07:01:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.195 07:01:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:14.195 07:01:21 -- common/autotest_common.sh@10 -- # set +x 00:21:14.195 [2024-05-12 07:01:21.305931] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
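For readers reconstructing the ADQ setup exercised above, the adq_configure_driver steps in the trace reduce to the short command sequence below. This is a condensed sketch of what this log shows, not a general recipe: the interface name cvl_0_0, the namespace cvl_0_0_ns_spdk, the address 10.0.0.2, and port 4420 are specific to this run, and the commands require root plus an ADQ-capable NIC (here an Intel E810 bound to the ice driver).

```
# Enable hardware TC offload and turn off packet-inspect optimization on the port
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# Busy polling keeps application threads spinning on their socket queues
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 (queues 0-1) for default traffic, TC1 (queues 2-3) for NVMe/TCP
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress

# Steer NVMe/TCP traffic (dst port 4420) into TC1 in hardware (skip_sw)
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
    prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The final set_xps_rxqs step in the trace then pins transmit queues to the same CPUs that service the matching receive queues, so a connection's TX and RX stay on one core.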
00:21:14.195 [2024-05-12 07:01:21.306019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.454 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.454 [2024-05-12 07:01:21.371739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.454 [2024-05-12 07:01:21.482522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:14.454 [2024-05-12 07:01:21.482677] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.454 [2024-05-12 07:01:21.482694] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.454 [2024-05-12 07:01:21.482727] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.454 [2024-05-12 07:01:21.482787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.454 [2024-05-12 07:01:21.482848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.454 [2024-05-12 07:01:21.482873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.454 [2024-05-12 07:01:21.482876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.454 07:01:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:14.454 07:01:21 -- common/autotest_common.sh@852 -- # return 0 00:21:14.454 07:01:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:14.454 07:01:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:14.454 07:01:21 -- common/autotest_common.sh@10 -- # set +x 00:21:14.454 07:01:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.454 07:01:21 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:21:14.454 07:01:21 -- 
target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:14.454 07:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.454 07:01:21 -- common/autotest_common.sh@10 -- # set +x 00:21:14.454 07:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.454 07:01:21 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:14.454 07:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.454 07:01:21 -- common/autotest_common.sh@10 -- # set +x 00:21:14.713 07:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.713 07:01:21 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:14.713 07:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.713 07:01:21 -- common/autotest_common.sh@10 -- # set +x 00:21:14.713 [2024-05-12 07:01:21.665600] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.713 07:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.713 07:01:21 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:14.713 07:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.713 07:01:21 -- common/autotest_common.sh@10 -- # set +x 00:21:14.713 Malloc1 00:21:14.713 07:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.713 07:01:21 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:14.713 07:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.713 07:01:21 -- common/autotest_common.sh@10 -- # set +x 00:21:14.713 07:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.713 07:01:21 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:14.713 07:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.713 07:01:21 -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.713 07:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.713 07:01:21 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.713 07:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.713 07:01:21 -- common/autotest_common.sh@10 -- # set +x 00:21:14.713 [2024-05-12 07:01:21.719059] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.713 07:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.713 07:01:21 -- target/perf_adq.sh@94 -- # perfpid=3086935 00:21:14.713 07:01:21 -- target/perf_adq.sh@95 -- # sleep 2 00:21:14.713 07:01:21 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:14.713 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.679 07:01:23 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:21:16.680 07:01:23 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:16.680 07:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:16.680 07:01:23 -- target/perf_adq.sh@97 -- # wc -l 00:21:16.680 07:01:23 -- common/autotest_common.sh@10 -- # set +x 00:21:16.680 07:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:16.680 07:01:23 -- target/perf_adq.sh@97 -- # count=2 00:21:16.680 07:01:23 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:21:16.680 07:01:23 -- target/perf_adq.sh@103 -- # wait 3086935 00:21:24.797 Initializing NVMe Controllers 00:21:24.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:24.797 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:24.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:24.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:24.797 Initialization complete. Launching workers. 00:21:24.797 ======================================================== 00:21:24.798 Latency(us) 00:21:24.798 Device Information : IOPS MiB/s Average min max 00:21:24.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6684.10 26.11 9574.49 1641.51 56507.78 00:21:24.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7174.10 28.02 8924.39 1584.56 52603.62 00:21:24.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7016.70 27.41 9125.96 2484.69 54655.90 00:21:24.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5751.40 22.47 11149.18 1777.10 56280.87 00:21:24.798 ======================================================== 00:21:24.798 Total : 26626.29 104.01 9621.27 1584.56 56507.78 00:21:24.798 00:21:24.798 07:01:31 -- target/perf_adq.sh@104 -- # nvmftestfini 00:21:24.798 07:01:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:24.798 07:01:31 -- nvmf/common.sh@116 -- # sync 00:21:24.798 07:01:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:24.798 07:01:31 -- nvmf/common.sh@119 -- # set +e 00:21:24.798 07:01:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:24.798 07:01:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:24.798 rmmod nvme_tcp 00:21:24.798 rmmod nvme_fabrics 00:21:24.798 rmmod nvme_keyring 00:21:24.798 07:01:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:24.798 07:01:31 -- nvmf/common.sh@123 -- # set -e 00:21:24.798 07:01:31 -- nvmf/common.sh@124 -- # return 0 00:21:24.798 07:01:31 -- nvmf/common.sh@477 -- # '[' -n 3086905 ']' 00:21:24.798 07:01:31 -- nvmf/common.sh@478 -- # killprocess 3086905 00:21:24.798 
07:01:31 -- common/autotest_common.sh@926 -- # '[' -z 3086905 ']' 00:21:24.798 07:01:31 -- common/autotest_common.sh@930 -- # kill -0 3086905 00:21:24.798 07:01:31 -- common/autotest_common.sh@931 -- # uname 00:21:24.798 07:01:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:24.798 07:01:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3086905 00:21:25.055 07:01:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:25.055 07:01:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:25.055 07:01:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3086905' 00:21:25.055 killing process with pid 3086905 00:21:25.055 07:01:31 -- common/autotest_common.sh@945 -- # kill 3086905 00:21:25.055 07:01:31 -- common/autotest_common.sh@950 -- # wait 3086905 00:21:25.313 07:01:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:25.313 07:01:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:25.313 07:01:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:25.313 07:01:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.313 07:01:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:25.313 07:01:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.313 07:01:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.313 07:01:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.212 07:01:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:27.212 07:01:34 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:21:27.212 00:21:27.212 real 0m42.904s 00:21:27.212 user 2m27.739s 00:21:27.212 sys 0m14.203s 00:21:27.212 07:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.212 07:01:34 -- common/autotest_common.sh@10 -- # set +x 00:21:27.212 ************************************ 00:21:27.212 END TEST nvmf_perf_adq 00:21:27.212 ************************************ 
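When comparing ADQ runs, the Latency(us) table printed by spdk_nvme_perf above is easier to track programmatically than by eye. Below is a minimal sketch that sums the per-core IOPS column; `sum_iops` is a hypothetical helper, not part of SPDK, and the awk field positions are inferred from the table layout shown in this log.

```shell
#!/usr/bin/env bash
# Sum the per-core IOPS column from an spdk_nvme_perf summary table.
# Rows of interest look like (as in the run above):
#   TCP (addr:... subnqn:...) NSID 1 from core 4: 6684.10 26.11 9574.49 ...
# The first numeric field after the "core N:" token is IOPS.
sum_iops() {
    awk '/from core [0-9]+:/ {
        for (i = 1; i < NF; i++)
            if ($i == "core") { sum += $(i + 2); break }
    } END { printf "%.2f\n", sum }' "$1"
}
```

Summing the four displayed per-core values this way gives 26626.30, a hair off the table's own Total row (26626.29), presumably because the tool totals unrounded values before printing.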
00:21:27.212 07:01:34 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:27.212 07:01:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:27.212 07:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:27.212 07:01:34 -- common/autotest_common.sh@10 -- # set +x 00:21:27.212 ************************************ 00:21:27.212 START TEST nvmf_shutdown 00:21:27.212 ************************************ 00:21:27.212 07:01:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:27.212 * Looking for test storage... 00:21:27.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.471 07:01:34 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.471 07:01:34 -- nvmf/common.sh@7 -- # uname -s 00:21:27.471 07:01:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.471 07:01:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.471 07:01:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.471 07:01:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.471 07:01:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.471 07:01:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.471 07:01:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.471 07:01:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.471 07:01:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.471 07:01:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.471 07:01:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.471 07:01:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.471 07:01:34 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.471 07:01:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.471 07:01:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.471 07:01:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.471 07:01:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.471 07:01:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.471 07:01:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.471 07:01:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.471 07:01:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.471 07:01:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.471 07:01:34 -- paths/export.sh@5 -- # export PATH 00:21:27.471 07:01:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.471 07:01:34 -- nvmf/common.sh@46 -- # : 0 00:21:27.471 07:01:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:27.471 07:01:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:27.471 07:01:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:27.471 07:01:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.471 07:01:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.471 07:01:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:27.471 07:01:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:27.471 07:01:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:27.471 07:01:34 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:27.471 07:01:34 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:27.471 07:01:34 -- target/shutdown.sh@146 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:27.471 07:01:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:27.471 07:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:27.471 07:01:34 -- common/autotest_common.sh@10 -- # set +x 00:21:27.471 ************************************ 00:21:27.471 START TEST nvmf_shutdown_tc1 00:21:27.471 ************************************ 00:21:27.471 07:01:34 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:21:27.471 07:01:34 -- target/shutdown.sh@74 -- # starttarget 00:21:27.471 07:01:34 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:27.471 07:01:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:27.471 07:01:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.471 07:01:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:27.472 07:01:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:27.472 07:01:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:27.472 07:01:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.472 07:01:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.472 07:01:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.472 07:01:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:27.472 07:01:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:27.472 07:01:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:27.472 07:01:34 -- common/autotest_common.sh@10 -- # set +x 00:21:29.373 07:01:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:29.373 07:01:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:29.373 07:01:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:29.373 07:01:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:29.373 07:01:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:29.373 07:01:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:29.373 07:01:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 
00:21:29.373 07:01:36 -- nvmf/common.sh@294 -- # net_devs=() 00:21:29.373 07:01:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:29.373 07:01:36 -- nvmf/common.sh@295 -- # e810=() 00:21:29.373 07:01:36 -- nvmf/common.sh@295 -- # local -ga e810 00:21:29.373 07:01:36 -- nvmf/common.sh@296 -- # x722=() 00:21:29.373 07:01:36 -- nvmf/common.sh@296 -- # local -ga x722 00:21:29.373 07:01:36 -- nvmf/common.sh@297 -- # mlx=() 00:21:29.373 07:01:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:29.373 07:01:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.373 07:01:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:29.373 07:01:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:29.373 07:01:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:29.373 07:01:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:29.373 07:01:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:29.373 07:01:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:29.373 07:01:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:21:29.373 07:01:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:29.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:29.374 07:01:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:29.374 07:01:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:29.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:29.374 07:01:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:29.374 07:01:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:29.374 07:01:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.374 07:01:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:29.374 07:01:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.374 07:01:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:29.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:29.374 07:01:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.374 07:01:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:29.374 07:01:36 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.374 07:01:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:29.374 07:01:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.374 07:01:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:29.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:29.374 07:01:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.374 07:01:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:29.374 07:01:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:29.374 07:01:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:29.374 07:01:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.374 07:01:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.374 07:01:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.374 07:01:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:29.374 07:01:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.374 07:01:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.374 07:01:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:29.374 07:01:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.374 07:01:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.374 07:01:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:29.374 07:01:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:29.374 07:01:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.374 07:01:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.374 07:01:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.374 07:01:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:21:29.374 07:01:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:29.374 07:01:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.374 07:01:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.374 07:01:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.374 07:01:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:29.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:21:29.374 00:21:29.374 --- 10.0.0.2 ping statistics --- 00:21:29.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.374 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:21:29.374 07:01:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:29.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:21:29.374 00:21:29.374 --- 10.0.0.1 ping statistics --- 00:21:29.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.374 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:29.374 07:01:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.374 07:01:36 -- nvmf/common.sh@410 -- # return 0 00:21:29.374 07:01:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:29.374 07:01:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.374 07:01:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:29.374 07:01:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.374 07:01:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:29.374 07:01:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:29.374 07:01:36 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:29.374 07:01:36 -- nvmf/common.sh@467 -- # 
timing_enter start_nvmf_tgt 00:21:29.374 07:01:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:29.374 07:01:36 -- common/autotest_common.sh@10 -- # set +x 00:21:29.374 07:01:36 -- nvmf/common.sh@469 -- # nvmfpid=3090149 00:21:29.374 07:01:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:29.374 07:01:36 -- nvmf/common.sh@470 -- # waitforlisten 3090149 00:21:29.374 07:01:36 -- common/autotest_common.sh@819 -- # '[' -z 3090149 ']' 00:21:29.374 07:01:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.374 07:01:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:29.374 07:01:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.374 07:01:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:29.374 07:01:36 -- common/autotest_common.sh@10 -- # set +x 00:21:29.631 [2024-05-12 07:01:36.517829] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:29.631 [2024-05-12 07:01:36.517915] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.631 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.631 [2024-05-12 07:01:36.583058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.631 [2024-05-12 07:01:36.690574] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:29.631 [2024-05-12 07:01:36.690741] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:29.631 [2024-05-12 07:01:36.690769] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.631 [2024-05-12 07:01:36.690782] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.631 [2024-05-12 07:01:36.690874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.632 [2024-05-12 07:01:36.690946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.632 [2024-05-12 07:01:36.691001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:29.632 [2024-05-12 07:01:36.691004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.562 07:01:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:30.562 07:01:37 -- common/autotest_common.sh@852 -- # return 0 00:21:30.562 07:01:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:30.562 07:01:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:30.562 07:01:37 -- common/autotest_common.sh@10 -- # set +x 00:21:30.562 07:01:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.562 07:01:37 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:30.562 07:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:30.562 07:01:37 -- common/autotest_common.sh@10 -- # set +x 00:21:30.562 [2024-05-12 07:01:37.485183] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.562 07:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:30.562 07:01:37 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:30.562 07:01:37 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:30.562 07:01:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:30.562 07:01:37 -- common/autotest_common.sh@10 -- # set +x 00:21:30.562 07:01:37 -- target/shutdown.sh@26 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.562 07:01:37 -- target/shutdown.sh@28 -- # cat 00:21:30.562 07:01:37 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:30.562 07:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:30.562 07:01:37 -- common/autotest_common.sh@10 -- # set +x 00:21:30.562 Malloc1 00:21:30.562 [2024-05-12 07:01:37.560166] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.562 Malloc2 00:21:30.562 Malloc3 00:21:30.562 Malloc4 00:21:30.820 Malloc5 00:21:30.820 Malloc6 00:21:30.820 Malloc7 00:21:30.820 Malloc8 00:21:30.820 
Malloc9 00:21:31.078 Malloc10 00:21:31.078 07:01:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:31.078 07:01:38 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:31.078 07:01:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:31.078 07:01:38 -- common/autotest_common.sh@10 -- # set +x 00:21:31.078 07:01:38 -- target/shutdown.sh@78 -- # perfpid=3090456 00:21:31.078 07:01:38 -- target/shutdown.sh@79 -- # waitforlisten 3090456 /var/tmp/bdevperf.sock 00:21:31.078 07:01:38 -- common/autotest_common.sh@819 -- # '[' -z 3090456 ']' 00:21:31.078 07:01:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.078 07:01:38 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:31.078 07:01:38 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:31.078 07:01:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:31.078 07:01:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.078 07:01:38 -- nvmf/common.sh@520 -- # config=() 00:21:31.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:31.078 07:01:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:31.078 07:01:38 -- nvmf/common.sh@520 -- # local subsystem config 00:21:31.078 07:01:38 -- common/autotest_common.sh@10 -- # set +x 00:21:31.078 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.078 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.078 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # cat 00:21:31.079 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.079 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # cat 00:21:31.079 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.079 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 
00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # cat 00:21:31.079 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.079 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # cat 00:21:31.079 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.079 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- 
# cat 00:21:31.079 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.079 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # cat 00:21:31.079 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.079 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # cat 00:21:31.079 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.079 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # cat 00:21:31.079 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.079 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # cat 00:21:31.079 07:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:31.079 { 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme$subsystem", 00:21:31.079 "trtype": "$TEST_TRANSPORT", 00:21:31.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "$NVMF_PORT", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.079 "hdgst": ${hdgst:-false}, 00:21:31.079 "ddgst": ${ddgst:-false} 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 } 00:21:31.079 EOF 00:21:31.079 )") 00:21:31.079 07:01:38 -- nvmf/common.sh@542 -- # cat 00:21:31.079 07:01:38 -- nvmf/common.sh@544 -- # jq . 
00:21:31.079 07:01:38 -- nvmf/common.sh@545 -- # IFS=, 00:21:31.079 07:01:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme1", 00:21:31.079 "trtype": "tcp", 00:21:31.079 "traddr": "10.0.0.2", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "4420", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.079 "hdgst": false, 00:21:31.079 "ddgst": false 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 },{ 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme2", 00:21:31.079 "trtype": "tcp", 00:21:31.079 "traddr": "10.0.0.2", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "4420", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:31.079 "hdgst": false, 00:21:31.079 "ddgst": false 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 },{ 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme3", 00:21:31.079 "trtype": "tcp", 00:21:31.079 "traddr": "10.0.0.2", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "4420", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:31.079 "hdgst": false, 00:21:31.079 "ddgst": false 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 },{ 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme4", 00:21:31.079 "trtype": "tcp", 00:21:31.079 "traddr": "10.0.0.2", 00:21:31.079 "adrfam": "ipv4", 00:21:31.079 "trsvcid": "4420", 00:21:31.079 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:31.079 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:31.079 "hdgst": false, 00:21:31.079 "ddgst": false 00:21:31.079 }, 00:21:31.079 "method": "bdev_nvme_attach_controller" 00:21:31.079 },{ 00:21:31.079 "params": { 00:21:31.079 "name": "Nvme5", 00:21:31.079 "trtype": "tcp", 00:21:31.080 "traddr": "10.0.0.2", 00:21:31.080 "adrfam": "ipv4", 
00:21:31.080 "trsvcid": "4420", 00:21:31.080 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:31.080 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:31.080 "hdgst": false, 00:21:31.080 "ddgst": false 00:21:31.080 }, 00:21:31.080 "method": "bdev_nvme_attach_controller" 00:21:31.080 },{ 00:21:31.080 "params": { 00:21:31.080 "name": "Nvme6", 00:21:31.080 "trtype": "tcp", 00:21:31.080 "traddr": "10.0.0.2", 00:21:31.080 "adrfam": "ipv4", 00:21:31.080 "trsvcid": "4420", 00:21:31.080 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:31.080 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:31.080 "hdgst": false, 00:21:31.080 "ddgst": false 00:21:31.080 }, 00:21:31.080 "method": "bdev_nvme_attach_controller" 00:21:31.080 },{ 00:21:31.080 "params": { 00:21:31.080 "name": "Nvme7", 00:21:31.080 "trtype": "tcp", 00:21:31.080 "traddr": "10.0.0.2", 00:21:31.080 "adrfam": "ipv4", 00:21:31.080 "trsvcid": "4420", 00:21:31.080 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:31.080 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:31.080 "hdgst": false, 00:21:31.080 "ddgst": false 00:21:31.080 }, 00:21:31.080 "method": "bdev_nvme_attach_controller" 00:21:31.080 },{ 00:21:31.080 "params": { 00:21:31.080 "name": "Nvme8", 00:21:31.080 "trtype": "tcp", 00:21:31.080 "traddr": "10.0.0.2", 00:21:31.080 "adrfam": "ipv4", 00:21:31.080 "trsvcid": "4420", 00:21:31.080 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:31.080 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:31.080 "hdgst": false, 00:21:31.080 "ddgst": false 00:21:31.080 }, 00:21:31.080 "method": "bdev_nvme_attach_controller" 00:21:31.080 },{ 00:21:31.080 "params": { 00:21:31.080 "name": "Nvme9", 00:21:31.080 "trtype": "tcp", 00:21:31.080 "traddr": "10.0.0.2", 00:21:31.080 "adrfam": "ipv4", 00:21:31.080 "trsvcid": "4420", 00:21:31.080 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:31.080 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:31.080 "hdgst": false, 00:21:31.080 "ddgst": false 00:21:31.080 }, 00:21:31.080 "method": "bdev_nvme_attach_controller" 
00:21:31.080 },{ 00:21:31.080 "params": { 00:21:31.080 "name": "Nvme10", 00:21:31.080 "trtype": "tcp", 00:21:31.080 "traddr": "10.0.0.2", 00:21:31.080 "adrfam": "ipv4", 00:21:31.080 "trsvcid": "4420", 00:21:31.080 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:31.080 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:31.080 "hdgst": false, 00:21:31.080 "ddgst": false 00:21:31.080 }, 00:21:31.080 "method": "bdev_nvme_attach_controller" 00:21:31.080 }' 00:21:31.080 [2024-05-12 07:01:38.069313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:31.080 [2024-05-12 07:01:38.069395] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:31.080 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.080 [2024-05-12 07:01:38.131955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.338 [2024-05-12 07:01:38.240641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.704 07:01:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:32.704 07:01:39 -- common/autotest_common.sh@852 -- # return 0 00:21:32.704 07:01:39 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:32.704 07:01:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.704 07:01:39 -- common/autotest_common.sh@10 -- # set +x 00:21:32.704 07:01:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.704 07:01:39 -- target/shutdown.sh@83 -- # kill -9 3090456 00:21:32.704 07:01:39 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:32.704 07:01:39 -- target/shutdown.sh@87 -- # sleep 1 00:21:34.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3090456 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:21:34.075 07:01:40 -- target/shutdown.sh@88 -- # kill -0 3090149 00:21:34.075 07:01:40 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:34.075 07:01:40 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:34.075 07:01:40 -- nvmf/common.sh@520 -- # config=() 00:21:34.075 07:01:40 -- nvmf/common.sh@520 -- # local subsystem config 00:21:34.075 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.075 07:01:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:34.075 { 00:21:34.075 "params": { 00:21:34.075 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:34.076 { 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 
07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:34.076 { 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:34.076 { 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:34.076 { 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:34.076 { 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:34.076 { 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # 
config+=("$(cat <<-EOF 00:21:34.076 { 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:34.076 { 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:34.076 { 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme$subsystem", 00:21:34.076 "trtype": "$TEST_TRANSPORT", 00:21:34.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "$NVMF_PORT", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.076 "hdgst": ${hdgst:-false}, 00:21:34.076 "ddgst": ${ddgst:-false} 00:21:34.076 }, 
00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 } 00:21:34.076 EOF 00:21:34.076 )") 00:21:34.076 07:01:40 -- nvmf/common.sh@542 -- # cat 00:21:34.076 07:01:40 -- nvmf/common.sh@544 -- # jq . 00:21:34.076 07:01:40 -- nvmf/common.sh@545 -- # IFS=, 00:21:34.076 07:01:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme1", 00:21:34.076 "trtype": "tcp", 00:21:34.076 "traddr": "10.0.0.2", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "4420", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.076 "hdgst": false, 00:21:34.076 "ddgst": false 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 },{ 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme2", 00:21:34.076 "trtype": "tcp", 00:21:34.076 "traddr": "10.0.0.2", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "4420", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:34.076 "hdgst": false, 00:21:34.076 "ddgst": false 00:21:34.076 }, 00:21:34.076 "method": "bdev_nvme_attach_controller" 00:21:34.076 },{ 00:21:34.076 "params": { 00:21:34.076 "name": "Nvme3", 00:21:34.076 "trtype": "tcp", 00:21:34.076 "traddr": "10.0.0.2", 00:21:34.076 "adrfam": "ipv4", 00:21:34.076 "trsvcid": "4420", 00:21:34.076 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:34.076 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:34.076 "hdgst": false, 00:21:34.077 "ddgst": false 00:21:34.077 }, 00:21:34.077 "method": "bdev_nvme_attach_controller" 00:21:34.077 },{ 00:21:34.077 "params": { 00:21:34.077 "name": "Nvme4", 00:21:34.077 "trtype": "tcp", 00:21:34.077 "traddr": "10.0.0.2", 00:21:34.077 "adrfam": "ipv4", 00:21:34.077 "trsvcid": "4420", 00:21:34.077 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:34.077 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:34.077 "hdgst": false, 00:21:34.077 "ddgst": false 00:21:34.077 }, 
00:21:34.077 "method": "bdev_nvme_attach_controller" 00:21:34.077 },{ 00:21:34.077 "params": { 00:21:34.077 "name": "Nvme5", 00:21:34.077 "trtype": "tcp", 00:21:34.077 "traddr": "10.0.0.2", 00:21:34.077 "adrfam": "ipv4", 00:21:34.077 "trsvcid": "4420", 00:21:34.077 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:34.077 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:34.077 "hdgst": false, 00:21:34.077 "ddgst": false 00:21:34.077 }, 00:21:34.077 "method": "bdev_nvme_attach_controller" 00:21:34.077 },{ 00:21:34.077 "params": { 00:21:34.077 "name": "Nvme6", 00:21:34.077 "trtype": "tcp", 00:21:34.077 "traddr": "10.0.0.2", 00:21:34.077 "adrfam": "ipv4", 00:21:34.077 "trsvcid": "4420", 00:21:34.077 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:34.077 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:34.077 "hdgst": false, 00:21:34.077 "ddgst": false 00:21:34.077 }, 00:21:34.077 "method": "bdev_nvme_attach_controller" 00:21:34.077 },{ 00:21:34.077 "params": { 00:21:34.077 "name": "Nvme7", 00:21:34.077 "trtype": "tcp", 00:21:34.077 "traddr": "10.0.0.2", 00:21:34.077 "adrfam": "ipv4", 00:21:34.077 "trsvcid": "4420", 00:21:34.077 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:34.077 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:34.077 "hdgst": false, 00:21:34.077 "ddgst": false 00:21:34.077 }, 00:21:34.077 "method": "bdev_nvme_attach_controller" 00:21:34.077 },{ 00:21:34.077 "params": { 00:21:34.077 "name": "Nvme8", 00:21:34.077 "trtype": "tcp", 00:21:34.077 "traddr": "10.0.0.2", 00:21:34.077 "adrfam": "ipv4", 00:21:34.077 "trsvcid": "4420", 00:21:34.077 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:34.077 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:34.077 "hdgst": false, 00:21:34.077 "ddgst": false 00:21:34.077 }, 00:21:34.077 "method": "bdev_nvme_attach_controller" 00:21:34.077 },{ 00:21:34.077 "params": { 00:21:34.077 "name": "Nvme9", 00:21:34.077 "trtype": "tcp", 00:21:34.077 "traddr": "10.0.0.2", 00:21:34.077 "adrfam": "ipv4", 00:21:34.077 "trsvcid": "4420", 00:21:34.077 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:34.077 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:34.077 "hdgst": false, 00:21:34.077 "ddgst": false 00:21:34.077 }, 00:21:34.077 "method": "bdev_nvme_attach_controller" 00:21:34.077 },{ 00:21:34.077 "params": { 00:21:34.077 "name": "Nvme10", 00:21:34.077 "trtype": "tcp", 00:21:34.077 "traddr": "10.0.0.2", 00:21:34.077 "adrfam": "ipv4", 00:21:34.077 "trsvcid": "4420", 00:21:34.077 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:34.077 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:34.077 "hdgst": false, 00:21:34.077 "ddgst": false 00:21:34.077 }, 00:21:34.077 "method": "bdev_nvme_attach_controller" 00:21:34.077 }' 00:21:34.077 [2024-05-12 07:01:40.804530] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:34.077 [2024-05-12 07:01:40.804610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090772 ] 00:21:34.077 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.077 [2024-05-12 07:01:40.868857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.077 [2024-05-12 07:01:40.976424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.450 Running I/O for 1 seconds... 
00:21:36.424 00:21:36.424 Latency(us) 00:21:36.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.424 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.424 Verification LBA range: start 0x0 length 0x400 00:21:36.424 Nvme1n1 : 1.09 372.97 23.31 0.00 0.00 167878.96 10922.67 177869.56 00:21:36.424 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.424 Verification LBA range: start 0x0 length 0x400 00:21:36.424 Nvme2n1 : 1.13 382.87 23.93 0.00 0.00 157839.75 17379.18 138256.69 00:21:36.424 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.424 Verification LBA range: start 0x0 length 0x400 00:21:36.424 Nvme3n1 : 1.07 403.97 25.25 0.00 0.00 153855.69 13786.83 119615.34 00:21:36.424 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.424 Verification LBA range: start 0x0 length 0x400 00:21:36.424 Nvme4n1 : 1.08 372.74 23.30 0.00 0.00 164671.99 6990.51 146800.64 00:21:36.424 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.424 Verification LBA range: start 0x0 length 0x400 00:21:36.424 Nvme5n1 : 1.10 393.58 24.60 0.00 0.00 155888.00 2597.17 138256.69 00:21:36.424 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.425 Verification LBA range: start 0x0 length 0x400 00:21:36.425 Nvme6n1 : 1.08 401.85 25.12 0.00 0.00 151004.94 17087.91 118838.61 00:21:36.425 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.425 Verification LBA range: start 0x0 length 0x400 00:21:36.425 Nvme7n1 : 1.09 402.69 25.17 0.00 0.00 149258.38 21359.88 118061.89 00:21:36.425 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.425 Verification LBA range: start 0x0 length 0x400 00:21:36.425 Nvme8n1 : 1.10 394.25 24.64 0.00 0.00 152581.38 11019.76 134373.07 00:21:36.425 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:21:36.425 Verification LBA range: start 0x0 length 0x400 00:21:36.425 Nvme9n1 : 1.11 400.63 25.04 0.00 0.00 149200.12 8738.13 125052.40 00:21:36.425 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.425 Verification LBA range: start 0x0 length 0x400 00:21:36.425 Nvme10n1 : 1.10 397.88 24.87 0.00 0.00 148685.32 4271.98 126605.84 00:21:36.425 =================================================================================================================== 00:21:36.425 Total : 3923.42 245.21 0.00 0.00 154904.15 2597.17 177869.56 00:21:36.684 07:01:43 -- target/shutdown.sh@93 -- # stoptarget 00:21:36.684 07:01:43 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:36.684 07:01:43 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:36.684 07:01:43 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:36.684 07:01:43 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:36.684 07:01:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:36.684 07:01:43 -- nvmf/common.sh@116 -- # sync 00:21:36.684 07:01:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:36.684 07:01:43 -- nvmf/common.sh@119 -- # set +e 00:21:36.684 07:01:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:36.684 07:01:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:36.684 rmmod nvme_tcp 00:21:36.684 rmmod nvme_fabrics 00:21:36.684 rmmod nvme_keyring 00:21:36.684 07:01:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:36.684 07:01:43 -- nvmf/common.sh@123 -- # set -e 00:21:36.684 07:01:43 -- nvmf/common.sh@124 -- # return 0 00:21:36.684 07:01:43 -- nvmf/common.sh@477 -- # '[' -n 3090149 ']' 00:21:36.684 07:01:43 -- nvmf/common.sh@478 -- # killprocess 3090149 00:21:36.684 07:01:43 -- common/autotest_common.sh@926 -- # '[' -z 3090149 ']' 00:21:36.684 07:01:43 -- 
common/autotest_common.sh@930 -- # kill -0 3090149 00:21:36.684 07:01:43 -- common/autotest_common.sh@931 -- # uname 00:21:36.684 07:01:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:36.684 07:01:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3090149 00:21:36.684 07:01:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:36.684 07:01:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:36.684 07:01:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3090149' 00:21:36.684 killing process with pid 3090149 00:21:36.684 07:01:43 -- common/autotest_common.sh@945 -- # kill 3090149 00:21:36.684 07:01:43 -- common/autotest_common.sh@950 -- # wait 3090149 00:21:37.250 07:01:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:37.250 07:01:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:37.250 07:01:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:37.250 07:01:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.250 07:01:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:37.250 07:01:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.250 07:01:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.250 07:01:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.786 07:01:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:39.786 00:21:39.786 real 0m11.971s 00:21:39.786 user 0m34.603s 00:21:39.786 sys 0m3.277s 00:21:39.786 07:01:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.786 07:01:46 -- common/autotest_common.sh@10 -- # set +x 00:21:39.786 ************************************ 00:21:39.786 END TEST nvmf_shutdown_tc1 00:21:39.786 ************************************ 00:21:39.786 07:01:46 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:39.786 07:01:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:21:39.786 07:01:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:39.786 07:01:46 -- common/autotest_common.sh@10 -- # set +x 00:21:39.786 ************************************ 00:21:39.786 START TEST nvmf_shutdown_tc2 00:21:39.786 ************************************ 00:21:39.786 07:01:46 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:21:39.786 07:01:46 -- target/shutdown.sh@98 -- # starttarget 00:21:39.786 07:01:46 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:39.786 07:01:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:39.786 07:01:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.786 07:01:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:39.786 07:01:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:39.786 07:01:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:39.786 07:01:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.786 07:01:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.786 07:01:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.786 07:01:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:39.786 07:01:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:39.786 07:01:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:39.786 07:01:46 -- common/autotest_common.sh@10 -- # set +x 00:21:39.786 07:01:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:39.786 07:01:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:39.786 07:01:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:39.786 07:01:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:39.786 07:01:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:39.786 07:01:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:39.786 07:01:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:39.786 07:01:46 -- nvmf/common.sh@294 -- # net_devs=() 00:21:39.786 07:01:46 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:21:39.786 07:01:46 -- nvmf/common.sh@295 -- # e810=() 00:21:39.786 07:01:46 -- nvmf/common.sh@295 -- # local -ga e810 00:21:39.786 07:01:46 -- nvmf/common.sh@296 -- # x722=() 00:21:39.786 07:01:46 -- nvmf/common.sh@296 -- # local -ga x722 00:21:39.786 07:01:46 -- nvmf/common.sh@297 -- # mlx=() 00:21:39.786 07:01:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:39.786 07:01:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.786 07:01:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.786 07:01:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.786 07:01:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.786 07:01:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.786 07:01:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.786 07:01:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.786 07:01:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.786 07:01:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.787 07:01:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.787 07:01:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.787 07:01:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:39.787 07:01:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:39.787 07:01:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:39.787 07:01:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:39.787 07:01:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:39.787 Found 0000:0a:00.0 (0x8086 
- 0x159b) 00:21:39.787 07:01:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:39.787 07:01:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:39.787 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:39.787 07:01:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:39.787 07:01:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:39.787 07:01:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.787 07:01:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:39.787 07:01:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.787 07:01:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:39.787 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:39.787 07:01:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.787 07:01:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:39.787 07:01:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.787 07:01:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:39.787 07:01:46 -- 
nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.787 07:01:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:39.787 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:39.787 07:01:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.787 07:01:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:39.787 07:01:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:39.787 07:01:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:39.787 07:01:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.787 07:01:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.787 07:01:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.787 07:01:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:39.787 07:01:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.787 07:01:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.787 07:01:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:39.787 07:01:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.787 07:01:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.787 07:01:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:39.787 07:01:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:39.787 07:01:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.787 07:01:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.787 07:01:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.787 07:01:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.787 07:01:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:39.787 07:01:46 -- nvmf/common.sh@259 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.787 07:01:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.787 07:01:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.787 07:01:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:39.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:21:39.787 00:21:39.787 --- 10.0.0.2 ping statistics --- 00:21:39.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.787 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:39.787 07:01:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:21:39.787 00:21:39.787 --- 10.0.0.1 ping statistics --- 00:21:39.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.787 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:21:39.787 07:01:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.787 07:01:46 -- nvmf/common.sh@410 -- # return 0 00:21:39.787 07:01:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:39.787 07:01:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.787 07:01:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:39.787 07:01:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.787 07:01:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:39.787 07:01:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:39.787 07:01:46 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:39.787 07:01:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:39.787 07:01:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:39.787 07:01:46 -- 
common/autotest_common.sh@10 -- # set +x 00:21:39.787 07:01:46 -- nvmf/common.sh@469 -- # nvmfpid=3091555 00:21:39.787 07:01:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:39.787 07:01:46 -- nvmf/common.sh@470 -- # waitforlisten 3091555 00:21:39.787 07:01:46 -- common/autotest_common.sh@819 -- # '[' -z 3091555 ']' 00:21:39.787 07:01:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.787 07:01:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:39.787 07:01:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.787 07:01:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:39.787 07:01:46 -- common/autotest_common.sh@10 -- # set +x 00:21:39.787 [2024-05-12 07:01:46.577770] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:39.787 [2024-05-12 07:01:46.577859] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.787 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.787 [2024-05-12 07:01:46.642097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.787 [2024-05-12 07:01:46.753879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:39.787 [2024-05-12 07:01:46.754043] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.787 [2024-05-12 07:01:46.754061] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:39.787 [2024-05-12 07:01:46.754074] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.787 [2024-05-12 07:01:46.754126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.787 [2024-05-12 07:01:46.754191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.787 [2024-05-12 07:01:46.754216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:39.787 [2024-05-12 07:01:46.754218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.718 07:01:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:40.718 07:01:47 -- common/autotest_common.sh@852 -- # return 0 00:21:40.718 07:01:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:40.718 07:01:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:40.718 07:01:47 -- common/autotest_common.sh@10 -- # set +x 00:21:40.718 07:01:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.718 07:01:47 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.718 07:01:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.718 07:01:47 -- common/autotest_common.sh@10 -- # set +x 00:21:40.718 [2024-05-12 07:01:47.550222] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.718 07:01:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.718 07:01:47 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:40.718 07:01:47 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:40.718 07:01:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:40.718 07:01:47 -- common/autotest_common.sh@10 -- # set +x 00:21:40.718 07:01:47 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.718 07:01:47 -- target/shutdown.sh@28 -- # cat 00:21:40.718 07:01:47 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:40.718 07:01:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.718 07:01:47 -- common/autotest_common.sh@10 -- # set +x 00:21:40.718 Malloc1 00:21:40.718 [2024-05-12 07:01:47.625210] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.718 Malloc2 00:21:40.718 Malloc3 00:21:40.718 Malloc4 00:21:40.718 Malloc5 00:21:40.718 Malloc6 00:21:40.974 Malloc7 00:21:40.974 Malloc8 00:21:40.974 Malloc9 00:21:40.974 Malloc10 00:21:40.974 07:01:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.974 07:01:48 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:21:40.974 07:01:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:40.974 07:01:48 -- common/autotest_common.sh@10 -- # set +x 00:21:40.974 07:01:48 -- target/shutdown.sh@102 -- # perfpid=3091868 00:21:40.974 07:01:48 -- target/shutdown.sh@103 -- # waitforlisten 3091868 /var/tmp/bdevperf.sock 00:21:40.974 07:01:48 -- common/autotest_common.sh@819 -- # '[' -z 3091868 ']' 00:21:40.974 07:01:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.974 07:01:48 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:40.974 07:01:48 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:40.974 07:01:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:40.974 07:01:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:40.974 07:01:48 -- nvmf/common.sh@520 -- # config=() 00:21:40.974 07:01:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:40.974 07:01:48 -- nvmf/common.sh@520 -- # local subsystem config 00:21:40.974 07:01:48 -- common/autotest_common.sh@10 -- # set +x 00:21:40.974 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.974 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.974 { 00:21:40.974 "params": { 00:21:40.974 "name": "Nvme$subsystem", 00:21:40.974 "trtype": "$TEST_TRANSPORT", 00:21:40.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.974 "adrfam": "ipv4", 00:21:40.974 "trsvcid": "$NVMF_PORT", 00:21:40.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.974 "hdgst": ${hdgst:-false}, 00:21:40.974 "ddgst": ${ddgst:-false} 00:21:40.974 }, 00:21:40.975 "method": "bdev_nvme_attach_controller" 00:21:40.975 } 00:21:40.975 EOF 00:21:40.975 )") 00:21:40.975 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.233 { 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme$subsystem", 00:21:41.233 "trtype": "$TEST_TRANSPORT", 00:21:41.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "$NVMF_PORT", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.233 "hdgst": ${hdgst:-false}, 00:21:41.233 "ddgst": ${ddgst:-false} 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 } 00:21:41.233 EOF 00:21:41.233 )") 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.233 { 00:21:41.233 "params": { 00:21:41.233 "name": 
"Nvme$subsystem", 00:21:41.233 "trtype": "$TEST_TRANSPORT", 00:21:41.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "$NVMF_PORT", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.233 "hdgst": ${hdgst:-false}, 00:21:41.233 "ddgst": ${ddgst:-false} 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 } 00:21:41.233 EOF 00:21:41.233 )") 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.233 { 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme$subsystem", 00:21:41.233 "trtype": "$TEST_TRANSPORT", 00:21:41.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "$NVMF_PORT", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.233 "hdgst": ${hdgst:-false}, 00:21:41.233 "ddgst": ${ddgst:-false} 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 } 00:21:41.233 EOF 00:21:41.233 )") 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.233 { 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme$subsystem", 00:21:41.233 "trtype": "$TEST_TRANSPORT", 00:21:41.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "$NVMF_PORT", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.233 "hdgst": ${hdgst:-false}, 00:21:41.233 "ddgst": ${ddgst:-false} 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 } 00:21:41.233 EOF 
00:21:41.233 )") 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.233 { 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme$subsystem", 00:21:41.233 "trtype": "$TEST_TRANSPORT", 00:21:41.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "$NVMF_PORT", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.233 "hdgst": ${hdgst:-false}, 00:21:41.233 "ddgst": ${ddgst:-false} 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 } 00:21:41.233 EOF 00:21:41.233 )") 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.233 { 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme$subsystem", 00:21:41.233 "trtype": "$TEST_TRANSPORT", 00:21:41.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "$NVMF_PORT", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.233 "hdgst": ${hdgst:-false}, 00:21:41.233 "ddgst": ${ddgst:-false} 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 } 00:21:41.233 EOF 00:21:41.233 )") 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.233 { 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme$subsystem", 00:21:41.233 "trtype": "$TEST_TRANSPORT", 00:21:41.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "$NVMF_PORT", 00:21:41.233 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.233 "hdgst": ${hdgst:-false}, 00:21:41.233 "ddgst": ${ddgst:-false} 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 } 00:21:41.233 EOF 00:21:41.233 )") 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.233 { 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme$subsystem", 00:21:41.233 "trtype": "$TEST_TRANSPORT", 00:21:41.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "$NVMF_PORT", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.233 "hdgst": ${hdgst:-false}, 00:21:41.233 "ddgst": ${ddgst:-false} 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 } 00:21:41.233 EOF 00:21:41.233 )") 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.233 { 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme$subsystem", 00:21:41.233 "trtype": "$TEST_TRANSPORT", 00:21:41.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "$NVMF_PORT", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.233 "hdgst": ${hdgst:-false}, 00:21:41.233 "ddgst": ${ddgst:-false} 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 } 00:21:41.233 EOF 00:21:41.233 )") 00:21:41.233 07:01:48 -- nvmf/common.sh@542 -- # cat 00:21:41.233 07:01:48 -- nvmf/common.sh@544 -- # jq . 
00:21:41.233 07:01:48 -- nvmf/common.sh@545 -- # IFS=, 00:21:41.233 07:01:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme1", 00:21:41.233 "trtype": "tcp", 00:21:41.233 "traddr": "10.0.0.2", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "4420", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.233 "hdgst": false, 00:21:41.233 "ddgst": false 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 },{ 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme2", 00:21:41.233 "trtype": "tcp", 00:21:41.233 "traddr": "10.0.0.2", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "4420", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:41.233 "hdgst": false, 00:21:41.233 "ddgst": false 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 },{ 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme3", 00:21:41.233 "trtype": "tcp", 00:21:41.233 "traddr": "10.0.0.2", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "4420", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:41.233 "hdgst": false, 00:21:41.233 "ddgst": false 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.233 },{ 00:21:41.233 "params": { 00:21:41.233 "name": "Nvme4", 00:21:41.233 "trtype": "tcp", 00:21:41.233 "traddr": "10.0.0.2", 00:21:41.233 "adrfam": "ipv4", 00:21:41.233 "trsvcid": "4420", 00:21:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:41.233 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:41.233 "hdgst": false, 00:21:41.233 "ddgst": false 00:21:41.233 }, 00:21:41.233 "method": "bdev_nvme_attach_controller" 00:21:41.234 },{ 00:21:41.234 "params": { 00:21:41.234 "name": "Nvme5", 00:21:41.234 "trtype": "tcp", 00:21:41.234 "traddr": "10.0.0.2", 00:21:41.234 "adrfam": "ipv4", 
00:21:41.234 "trsvcid": "4420", 00:21:41.234 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:41.234 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:41.234 "hdgst": false, 00:21:41.234 "ddgst": false 00:21:41.234 }, 00:21:41.234 "method": "bdev_nvme_attach_controller" 00:21:41.234 },{ 00:21:41.234 "params": { 00:21:41.234 "name": "Nvme6", 00:21:41.234 "trtype": "tcp", 00:21:41.234 "traddr": "10.0.0.2", 00:21:41.234 "adrfam": "ipv4", 00:21:41.234 "trsvcid": "4420", 00:21:41.234 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:41.234 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:41.234 "hdgst": false, 00:21:41.234 "ddgst": false 00:21:41.234 }, 00:21:41.234 "method": "bdev_nvme_attach_controller" 00:21:41.234 },{ 00:21:41.234 "params": { 00:21:41.234 "name": "Nvme7", 00:21:41.234 "trtype": "tcp", 00:21:41.234 "traddr": "10.0.0.2", 00:21:41.234 "adrfam": "ipv4", 00:21:41.234 "trsvcid": "4420", 00:21:41.234 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:41.234 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:41.234 "hdgst": false, 00:21:41.234 "ddgst": false 00:21:41.234 }, 00:21:41.234 "method": "bdev_nvme_attach_controller" 00:21:41.234 },{ 00:21:41.234 "params": { 00:21:41.234 "name": "Nvme8", 00:21:41.234 "trtype": "tcp", 00:21:41.234 "traddr": "10.0.0.2", 00:21:41.234 "adrfam": "ipv4", 00:21:41.234 "trsvcid": "4420", 00:21:41.234 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:41.234 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:41.234 "hdgst": false, 00:21:41.234 "ddgst": false 00:21:41.234 }, 00:21:41.234 "method": "bdev_nvme_attach_controller" 00:21:41.234 },{ 00:21:41.234 "params": { 00:21:41.234 "name": "Nvme9", 00:21:41.234 "trtype": "tcp", 00:21:41.234 "traddr": "10.0.0.2", 00:21:41.234 "adrfam": "ipv4", 00:21:41.234 "trsvcid": "4420", 00:21:41.234 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:41.234 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:41.234 "hdgst": false, 00:21:41.234 "ddgst": false 00:21:41.234 }, 00:21:41.234 "method": "bdev_nvme_attach_controller" 
00:21:41.234 },{ 00:21:41.234 "params": { 00:21:41.234 "name": "Nvme10", 00:21:41.234 "trtype": "tcp", 00:21:41.234 "traddr": "10.0.0.2", 00:21:41.234 "adrfam": "ipv4", 00:21:41.234 "trsvcid": "4420", 00:21:41.234 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:41.234 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:41.234 "hdgst": false, 00:21:41.234 "ddgst": false 00:21:41.234 }, 00:21:41.234 "method": "bdev_nvme_attach_controller" 00:21:41.234 }' 00:21:41.234 [2024-05-12 07:01:48.139667] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:41.234 [2024-05-12 07:01:48.139778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091868 ] 00:21:41.234 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.234 [2024-05-12 07:01:48.202603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.234 [2024-05-12 07:01:48.310197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.127 Running I/O for 10 seconds... 
00:21:43.127 07:01:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:43.127 07:01:49 -- common/autotest_common.sh@852 -- # return 0 00:21:43.127 07:01:49 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:43.127 07:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.127 07:01:49 -- common/autotest_common.sh@10 -- # set +x 00:21:43.127 07:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.127 07:01:49 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:43.127 07:01:49 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:43.127 07:01:49 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:43.127 07:01:49 -- target/shutdown.sh@57 -- # local ret=1 00:21:43.127 07:01:49 -- target/shutdown.sh@58 -- # local i 00:21:43.127 07:01:49 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:43.127 07:01:49 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:43.127 07:01:49 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:43.127 07:01:49 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:43.127 07:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.127 07:01:49 -- common/autotest_common.sh@10 -- # set +x 00:21:43.127 07:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.127 07:01:49 -- target/shutdown.sh@60 -- # read_io_count=42 00:21:43.127 07:01:49 -- target/shutdown.sh@63 -- # '[' 42 -ge 100 ']' 00:21:43.127 07:01:49 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:43.127 07:01:50 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:43.127 07:01:50 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:43.127 07:01:50 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:43.127 07:01:50 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:43.127 07:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.127 
07:01:50 -- common/autotest_common.sh@10 -- # set +x 00:21:43.127 07:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.127 07:01:50 -- target/shutdown.sh@60 -- # read_io_count=129 00:21:43.127 07:01:50 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:21:43.127 07:01:50 -- target/shutdown.sh@64 -- # ret=0 00:21:43.127 07:01:50 -- target/shutdown.sh@65 -- # break 00:21:43.127 07:01:50 -- target/shutdown.sh@69 -- # return 0 00:21:43.127 07:01:50 -- target/shutdown.sh@109 -- # killprocess 3091868 00:21:43.127 07:01:50 -- common/autotest_common.sh@926 -- # '[' -z 3091868 ']' 00:21:43.127 07:01:50 -- common/autotest_common.sh@930 -- # kill -0 3091868 00:21:43.127 07:01:50 -- common/autotest_common.sh@931 -- # uname 00:21:43.127 07:01:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:43.127 07:01:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3091868 00:21:43.386 07:01:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:43.386 07:01:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:43.386 07:01:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3091868' 00:21:43.386 killing process with pid 3091868 00:21:43.386 07:01:50 -- common/autotest_common.sh@945 -- # kill 3091868 00:21:43.386 07:01:50 -- common/autotest_common.sh@950 -- # wait 3091868 00:21:43.386 Received shutdown signal, test time was about 0.583844 seconds 00:21:43.386 00:21:43.386 Latency(us) 00:21:43.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.386 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme1n1 : 0.58 394.28 24.64 0.00 0.00 155458.72 27962.03 163111.82 00:21:43.386 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme2n1 : 0.58 329.54 20.60 0.00 0.00 
180202.79 35535.08 164665.27 00:21:43.386 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme3n1 : 0.57 401.31 25.08 0.00 0.00 148858.91 23301.69 134373.07 00:21:43.386 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme4n1 : 0.57 402.63 25.16 0.00 0.00 145933.64 27962.03 118061.89 00:21:43.386 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme5n1 : 0.57 400.36 25.02 0.00 0.00 144988.50 24563.86 117285.17 00:21:43.386 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme6n1 : 0.56 338.41 21.15 0.00 0.00 167503.37 35535.08 145247.19 00:21:43.386 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme7n1 : 0.57 398.91 24.93 0.00 0.00 141453.00 25826.04 116508.44 00:21:43.386 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme8n1 : 0.57 397.98 24.87 0.00 0.00 139782.88 25631.86 117285.17 00:21:43.386 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme9n1 : 0.58 391.48 24.47 0.00 0.00 139722.84 28156.21 123498.95 00:21:43.386 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:43.386 Verification LBA range: start 0x0 length 0x400 00:21:43.386 Nvme10n1 : 0.58 327.64 20.48 0.00 0.00 163172.12 39612.87 133596.35 00:21:43.386 =================================================================================================================== 00:21:43.386 Total : 
3782.54 236.41 0.00 0.00 151782.15 23301.69 164665.27 00:21:43.644 07:01:50 -- target/shutdown.sh@112 -- # sleep 1 00:21:44.574 07:01:51 -- target/shutdown.sh@113 -- # kill -0 3091555 00:21:44.574 07:01:51 -- target/shutdown.sh@115 -- # stoptarget 00:21:44.574 07:01:51 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:44.574 07:01:51 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:44.574 07:01:51 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:44.574 07:01:51 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:44.574 07:01:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:44.574 07:01:51 -- nvmf/common.sh@116 -- # sync 00:21:44.574 07:01:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:44.574 07:01:51 -- nvmf/common.sh@119 -- # set +e 00:21:44.574 07:01:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:44.574 07:01:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:44.574 rmmod nvme_tcp 00:21:44.574 rmmod nvme_fabrics 00:21:44.574 rmmod nvme_keyring 00:21:44.574 07:01:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:44.574 07:01:51 -- nvmf/common.sh@123 -- # set -e 00:21:44.574 07:01:51 -- nvmf/common.sh@124 -- # return 0 00:21:44.574 07:01:51 -- nvmf/common.sh@477 -- # '[' -n 3091555 ']' 00:21:44.574 07:01:51 -- nvmf/common.sh@478 -- # killprocess 3091555 00:21:44.574 07:01:51 -- common/autotest_common.sh@926 -- # '[' -z 3091555 ']' 00:21:44.574 07:01:51 -- common/autotest_common.sh@930 -- # kill -0 3091555 00:21:44.574 07:01:51 -- common/autotest_common.sh@931 -- # uname 00:21:44.574 07:01:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:44.574 07:01:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3091555 00:21:44.831 07:01:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:44.831 07:01:51 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:44.831 07:01:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3091555' 00:21:44.831 killing process with pid 3091555 00:21:44.831 07:01:51 -- common/autotest_common.sh@945 -- # kill 3091555 00:21:44.831 07:01:51 -- common/autotest_common.sh@950 -- # wait 3091555 00:21:45.396 07:01:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:45.396 07:01:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:45.396 07:01:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:45.396 07:01:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.396 07:01:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:45.396 07:01:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.396 07:01:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.396 07:01:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.297 07:01:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:47.297 00:21:47.297 real 0m7.935s 00:21:47.297 user 0m24.021s 00:21:47.297 sys 0m1.434s 00:21:47.297 07:01:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.297 07:01:54 -- common/autotest_common.sh@10 -- # set +x 00:21:47.297 ************************************ 00:21:47.297 END TEST nvmf_shutdown_tc2 00:21:47.297 ************************************ 00:21:47.297 07:01:54 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:47.297 07:01:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:47.297 07:01:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:47.297 07:01:54 -- common/autotest_common.sh@10 -- # set +x 00:21:47.297 ************************************ 00:21:47.297 START TEST nvmf_shutdown_tc3 00:21:47.297 ************************************ 00:21:47.297 07:01:54 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:21:47.297 07:01:54 -- 
target/shutdown.sh@120 -- # starttarget 00:21:47.297 07:01:54 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:47.297 07:01:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:47.297 07:01:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.297 07:01:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:47.297 07:01:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:47.297 07:01:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:47.297 07:01:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.297 07:01:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.297 07:01:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.297 07:01:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:47.298 07:01:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:47.298 07:01:54 -- common/autotest_common.sh@10 -- # set +x 00:21:47.298 07:01:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:47.298 07:01:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:47.298 07:01:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:47.298 07:01:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:47.298 07:01:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:47.298 07:01:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:47.298 07:01:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:47.298 07:01:54 -- nvmf/common.sh@294 -- # net_devs=() 00:21:47.298 07:01:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:47.298 07:01:54 -- nvmf/common.sh@295 -- # e810=() 00:21:47.298 07:01:54 -- nvmf/common.sh@295 -- # local -ga e810 00:21:47.298 07:01:54 -- nvmf/common.sh@296 -- # x722=() 00:21:47.298 07:01:54 -- nvmf/common.sh@296 -- # local -ga x722 00:21:47.298 07:01:54 -- nvmf/common.sh@297 -- # mlx=() 00:21:47.298 07:01:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:47.298 07:01:54 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.298 07:01:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:47.298 07:01:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:47.298 07:01:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:47.298 07:01:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.298 07:01:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:47.298 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:47.298 07:01:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.298 
07:01:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.298 07:01:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:47.298 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:47.298 07:01:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:47.298 07:01:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.298 07:01:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.298 07:01:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.298 07:01:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.298 07:01:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:47.298 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:47.298 07:01:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.298 07:01:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.298 07:01:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.298 07:01:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.298 07:01:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.298 07:01:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:47.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:47.298 07:01:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.298 07:01:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:47.298 07:01:54 -- 
nvmf/common.sh@402 -- # is_hw=yes 00:21:47.298 07:01:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:47.298 07:01:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:47.298 07:01:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.298 07:01:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.298 07:01:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.298 07:01:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:47.298 07:01:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.298 07:01:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.298 07:01:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:47.298 07:01:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.298 07:01:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.298 07:01:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:47.298 07:01:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:47.298 07:01:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.298 07:01:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.298 07:01:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.298 07:01:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.298 07:01:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:47.298 07:01:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.556 07:01:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.556 07:01:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.556 07:01:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:47.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:47.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:21:47.556 00:21:47.556 --- 10.0.0.2 ping statistics --- 00:21:47.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.556 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:21:47.556 07:01:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:21:47.557 00:21:47.557 --- 10.0.0.1 ping statistics --- 00:21:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.557 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:47.557 07:01:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.557 07:01:54 -- nvmf/common.sh@410 -- # return 0 00:21:47.557 07:01:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:47.557 07:01:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.557 07:01:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:47.557 07:01:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:47.557 07:01:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.557 07:01:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:47.557 07:01:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:47.557 07:01:54 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:47.557 07:01:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:47.557 07:01:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:47.557 07:01:54 -- common/autotest_common.sh@10 -- # set +x 00:21:47.557 07:01:54 -- nvmf/common.sh@469 -- # nvmfpid=3092680 00:21:47.557 07:01:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:47.557 07:01:54 -- nvmf/common.sh@470 -- # waitforlisten 
3092680 00:21:47.557 07:01:54 -- common/autotest_common.sh@819 -- # '[' -z 3092680 ']' 00:21:47.557 07:01:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.557 07:01:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:47.557 07:01:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.557 07:01:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.557 07:01:54 -- common/autotest_common.sh@10 -- # set +x 00:21:47.557 [2024-05-12 07:01:54.543929] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:47.557 [2024-05-12 07:01:54.544001] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.557 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.557 [2024-05-12 07:01:54.607465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.815 [2024-05-12 07:01:54.720652] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:47.815 [2024-05-12 07:01:54.720806] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.815 [2024-05-12 07:01:54.720825] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.815 [2024-05-12 07:01:54.720839] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:47.815 [2024-05-12 07:01:54.720893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.815 [2024-05-12 07:01:54.720948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:47.815 [2024-05-12 07:01:54.720951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.815 [2024-05-12 07:01:54.720921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.750 07:01:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:48.750 07:01:55 -- common/autotest_common.sh@852 -- # return 0 00:21:48.750 07:01:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:48.750 07:01:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:48.750 07:01:55 -- common/autotest_common.sh@10 -- # set +x 00:21:48.750 07:01:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.750 07:01:55 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:48.750 07:01:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.750 07:01:55 -- common/autotest_common.sh@10 -- # set +x 00:21:48.750 [2024-05-12 07:01:55.541257] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.750 07:01:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.750 07:01:55 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:48.750 07:01:55 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:48.750 07:01:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:48.750 07:01:55 -- common/autotest_common.sh@10 -- # set +x 00:21:48.750 07:01:55 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:48.750 07:01:55 -- target/shutdown.sh@28 -- # cat 00:21:48.750 07:01:55 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:48.750 07:01:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.750 07:01:55 -- common/autotest_common.sh@10 -- # set +x 00:21:48.750 Malloc1 00:21:48.750 [2024-05-12 07:01:55.616341] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.750 Malloc2 00:21:48.750 Malloc3 00:21:48.750 Malloc4 00:21:48.750 Malloc5 00:21:48.750 Malloc6 00:21:48.750 Malloc7 00:21:49.011 Malloc8 00:21:49.011 Malloc9 00:21:49.011 Malloc10 00:21:49.011 07:01:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.011 07:01:56 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:49.011 07:01:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:49.011 07:01:56 -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.011 07:01:56 -- target/shutdown.sh@124 -- # perfpid=3092992 00:21:49.011 07:01:56 -- target/shutdown.sh@125 -- # waitforlisten 3092992 /var/tmp/bdevperf.sock 00:21:49.011 07:01:56 -- common/autotest_common.sh@819 -- # '[' -z 3092992 ']' 00:21:49.011 07:01:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.011 07:01:56 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:49.011 07:01:56 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:49.011 07:01:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:49.011 07:01:56 -- nvmf/common.sh@520 -- # config=() 00:21:49.011 07:01:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:49.011 07:01:56 -- nvmf/common.sh@520 -- # local subsystem config 00:21:49.011 07:01:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:49.011 07:01:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.011 07:01:56 -- common/autotest_common.sh@10 -- # set +x 00:21:49.011 07:01:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.011 { 00:21:49.011 "params": { 00:21:49.011 "name": "Nvme$subsystem", 00:21:49.011 "trtype": "$TEST_TRANSPORT", 00:21:49.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.011 "adrfam": "ipv4", 00:21:49.011 "trsvcid": "$NVMF_PORT", 00:21:49.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.011 "hdgst": ${hdgst:-false}, 00:21:49.011 "ddgst": ${ddgst:-false} 00:21:49.011 }, 00:21:49.011 "method": "bdev_nvme_attach_controller" 00:21:49.011 } 00:21:49.011 EOF 00:21:49.011 )") 00:21:49.011 07:01:56 -- nvmf/common.sh@542 -- # cat 00:21:49.011 07:01:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.011 07:01:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.011 { 00:21:49.011 "params": { 00:21:49.011 "name": "Nvme$subsystem", 00:21:49.011 "trtype": "$TEST_TRANSPORT", 00:21:49.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.011 "adrfam": "ipv4", 00:21:49.011 "trsvcid": "$NVMF_PORT", 00:21:49.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.011 "hdgst": ${hdgst:-false}, 00:21:49.011 "ddgst": ${ddgst:-false} 00:21:49.011 }, 00:21:49.011 "method": "bdev_nvme_attach_controller" 00:21:49.011 } 00:21:49.011 EOF 00:21:49.011 )") 00:21:49.011 07:01:56 -- nvmf/common.sh@542 -- # cat 00:21:49.011 07:01:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.011 07:01:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.011 { 00:21:49.011 "params": { 00:21:49.011 "name": "Nvme$subsystem", 00:21:49.011 "trtype": "$TEST_TRANSPORT", 
00:21:49.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.011 "adrfam": "ipv4", 00:21:49.011 "trsvcid": "$NVMF_PORT", 00:21:49.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.011 "hdgst": ${hdgst:-false}, 00:21:49.011 "ddgst": ${ddgst:-false} 00:21:49.011 }, 00:21:49.011 "method": "bdev_nvme_attach_controller" 00:21:49.011 } 00:21:49.011 EOF 00:21:49.011 )") 00:21:49.011 07:01:56 -- nvmf/common.sh@542 -- # cat 00:21:49.011 07:01:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.011 07:01:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.011 { 00:21:49.011 "params": { 00:21:49.011 "name": "Nvme$subsystem", 00:21:49.011 "trtype": "$TEST_TRANSPORT", 00:21:49.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.011 "adrfam": "ipv4", 00:21:49.011 "trsvcid": "$NVMF_PORT", 00:21:49.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.011 "hdgst": ${hdgst:-false}, 00:21:49.011 "ddgst": ${ddgst:-false} 00:21:49.011 }, 00:21:49.011 "method": "bdev_nvme_attach_controller" 00:21:49.011 } 00:21:49.011 EOF 00:21:49.011 )") 00:21:49.011 07:01:56 -- nvmf/common.sh@542 -- # cat 00:21:49.012 07:01:56 -- nvmf/common.sh@544 -- # jq .
00:21:49.012 07:01:56 -- nvmf/common.sh@545 -- # IFS=, 00:21:49.012 07:01:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme1", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 00:21:49.012 },{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme2", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 00:21:49.012 },{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme3", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 00:21:49.012 },{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme4", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 00:21:49.012 },{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme5", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 
00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 00:21:49.012 },{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme6", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 00:21:49.012 },{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme7", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 00:21:49.012 },{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme8", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 00:21:49.012 },{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme9", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 
00:21:49.012 },{ 00:21:49.012 "params": { 00:21:49.012 "name": "Nvme10", 00:21:49.012 "trtype": "tcp", 00:21:49.012 "traddr": "10.0.0.2", 00:21:49.012 "adrfam": "ipv4", 00:21:49.012 "trsvcid": "4420", 00:21:49.012 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:49.012 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:49.012 "hdgst": false, 00:21:49.012 "ddgst": false 00:21:49.012 }, 00:21:49.012 "method": "bdev_nvme_attach_controller" 00:21:49.012 }' 00:21:49.012 [2024-05-12 07:01:56.112355] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:49.012 [2024-05-12 07:01:56.112438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092992 ] 00:21:49.270 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.270 [2024-05-12 07:01:56.177215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.270 [2024-05-12 07:01:56.285054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.174 Running I/O for 10 seconds... 
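The xtrace above shows the pattern nvmf/common.sh uses to build the bdevperf config: one JSON params fragment per subsystem appended to a bash array via a heredoc inside command substitution, then joined with `IFS=,` and `printf`, which is what produces the `},{`-separated controller list fed to jq. A minimal sketch of that pattern, reconstructed from the trace (the function body mirrors the traced lines but is not the actual script; defaults like `10.0.0.2`/`4420` are assumptions standing in for the test environment variables):

```shell
#!/usr/bin/env bash
# Reconstruction of the traced config-assembly loop from nvmf/common.sh.
gen_nvmf_target_json() {
    local subsystem
    local config=()

    # "${@:-1}" defaults to subsystem 1 when no arguments are given,
    # exactly as in the traced "for subsystem in ${@:-1}" line.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Join the fragments with commas, as the trace's IFS=, / printf step does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

# Example: emit attach_controller params for subsystems 1 and 2.
gen_nvmf_target_json 1 2
```

Joining with `"${config[*]}"` under `IFS=,` is what yields the `}` `,` `{` seams visible in the printed config above; the result is piped through `jq .` in the real script to validate and pretty-print it.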
00:21:51.174 07:01:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:51.174 07:01:57 -- common/autotest_common.sh@852 -- # return 0 00:21:51.174 07:01:57 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:51.174 07:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.174 07:01:57 -- common/autotest_common.sh@10 -- # set +x 00:21:51.174 07:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.174 07:01:57 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:51.174 07:01:57 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:51.174 07:01:57 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:51.174 07:01:57 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:51.174 07:01:57 -- target/shutdown.sh@57 -- # local ret=1 00:21:51.174 07:01:57 -- target/shutdown.sh@58 -- # local i 00:21:51.174 07:01:57 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:51.174 07:01:57 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:51.174 07:01:57 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:51.174 07:01:57 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:51.174 07:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.174 07:01:57 -- common/autotest_common.sh@10 -- # set +x 00:21:51.174 07:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.174 07:01:57 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:51.174 07:01:57 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:51.174 07:01:57 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:51.174 07:01:58 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:51.174 07:01:58 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:51.174 07:01:58 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:51.174 
07:01:58 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:51.174 07:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.174 07:01:58 -- common/autotest_common.sh@10 -- # set +x 00:21:51.174 07:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.174 07:01:58 -- target/shutdown.sh@60 -- # read_io_count=129 00:21:51.174 07:01:58 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:21:51.174 07:01:58 -- target/shutdown.sh@64 -- # ret=0 00:21:51.174 07:01:58 -- target/shutdown.sh@65 -- # break 00:21:51.174 07:01:58 -- target/shutdown.sh@69 -- # return 0 00:21:51.174 07:01:58 -- target/shutdown.sh@134 -- # killprocess 3092680 00:21:51.174 07:01:58 -- common/autotest_common.sh@926 -- # '[' -z 3092680 ']' 00:21:51.174 07:01:58 -- common/autotest_common.sh@930 -- # kill -0 3092680 00:21:51.174 07:01:58 -- common/autotest_common.sh@931 -- # uname 00:21:51.174 07:01:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:51.174 07:01:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3092680 00:21:51.174 07:01:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:51.174 07:01:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:51.174 07:01:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3092680' 00:21:51.174 killing process with pid 3092680 00:21:51.174 07:01:58 -- common/autotest_common.sh@945 -- # kill 3092680 00:21:51.174 07:01:58 -- common/autotest_common.sh@950 -- # wait 3092680 00:21:51.174 [2024-05-12 07:01:58.282816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaffc60 is same with the state(5) to be set 00:21:51.174 [2024-05-12 07:01:58.282896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaffc60 is same with the state(5) to be set 00:21:51.174 [2024-05-12 07:01:58.282912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaffc60 is same with 
the state(5) to be set 00:21:51.174 [2024-05-12 07:01:58.286285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb024c0 is same with the state(5) to be set
00:21:51.174 [2024-05-12 07:01:58.287378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00110 is same with the state(5) to be set
00:21:51.175 [2024-05-12 07:01:58.288945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set
is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 
00:21:51.175 [2024-05-12 07:01:58.289543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289650] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289689] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.289755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb005c0 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.291058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.175 [2024-05-12 07:01:58.291096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 
is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 
00:21:51.176 [2024-05-12 07:01:58.291483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291630] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.291872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00f00 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.292944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.292969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.292981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.292993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 
is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 
00:21:51.176 [2024-05-12 07:01:58.293354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.176 [2024-05-12 07:01:58.293402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293498] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-05-12 07:01:58.293535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with tid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 he state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.293586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with t[2024-05-12 07:01:58.293590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:21:51.177 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293604] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.293616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.293641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26eb8c0 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 
07:01:58.293691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.293738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.293776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 
07:01:58.293798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.293801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.293831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248da50 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 
[2024-05-12 07:01:58.293893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-05-12 07:01:58.293905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with tid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 he state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01260 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.293920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.293949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.293975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.293989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.294004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.294017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252a210 is 
same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.294061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.294081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.294096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.294109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.294138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.294152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.294166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.294179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.294192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252cf70 is same with the state(5) to be set 00:21:51.177 [2024-05-12 07:01:58.294246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.177 [2024-05-12 07:01:58.294266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.177 [2024-05-12 07:01:58.294281] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.178 [2024-05-12 07:01:58.294294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.294308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.178 [2024-05-12 07:01:58.294321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.294334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.178 [2024-05-12 07:01:58.294347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.294359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25654f0 is same with the state(5) to be set 00:21:51.178 [2024-05-12 07:01:58.294416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.178 [2024-05-12 07:01:58.294437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.294452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.178 [2024-05-12 07:01:58.294465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.294479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:51.178 [2024-05-12 07:01:58.294492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.294506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.178 [2024-05-12 07:01:58.294519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.294532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f0a60 is same with the state(5) to be set 00:21:51.178 [2024-05-12 07:01:58.294918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.294943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.294968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.294989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295054] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01710 is same with the state(5) to be set 00:21:51.178 [2024-05-12 07:01:58.295159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295550] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.178 [2024-05-12 07:01:58.295718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.178 [2024-05-12 07:01:58.295733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.295760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.295775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.295790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.295804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.295820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.295821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.295852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.295853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.295868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 
[2024-05-12 07:01:58.295882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.295894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.295906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.295919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.295932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.295944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 
[2024-05-12 07:01:58.295958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.295972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.295984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.295999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296126] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 
00:21:51.179 [2024-05-12 07:01:58.296222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21888 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.179 [2024-05-12 07:01:58.296386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.179 [2024-05-12 07:01:58.296394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.179 [2024-05-12 07:01:58.296398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 
[2024-05-12 07:01:58.296555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to 
be set 00:21:51.180 [2024-05-12 07:01:58.296645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01ba0 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.296662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.296921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.296934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297056] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x2519e70 was disconnected and freed. reset controller. 00:21:51.180 [2024-05-12 07:01:58.297194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.297216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.297252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.297283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.297312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.297342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:51.180 [2024-05-12 07:01:58.297372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.297402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.297417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.297436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.297453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.297452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.297467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.297469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb02030 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.297484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.180 [2024-05-12 07:01:58.297492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.180 [2024-05-12 07:01:58.297499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.180 [2024-05-12 07:01:58.297504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.181 [2024-05-12 07:01:58.297517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.181 [2024-05-12 07:01:58.297545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.181 [2024-05-12 07:01:58.297557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.181 [2024-05-12 07:01:58.297570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.181 [2024-05-12 07:01:58.297583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.181 [2024-05-12 07:01:58.297595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.181 [2024-05-12 07:01:58.297626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.181 [2024-05-12 07:01:58.297640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.181 
[2024-05-12 07:01:58.297652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.181 [2024-05-12 07:01:58.297665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.181 [2024-05-12 07:01:58.297678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.181 [2024-05-12 07:01:58.297715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.181 [2024-05-12 07:01:58.297728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.181 [2024-05-12 07:01:58.297751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to 
be set 00:21:51.181 [2024-05-12 07:01:58.297756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.181 [2024-05-12 07:01:58.297764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.181 [2024-05-12 07:01:58.297776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.181 [2024-05-12 07:01:58.297789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.181 [2024-05-12 07:01:58.297800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.181 [2024-05-12 07:01:58.297802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.454 [2024-05-12 07:01:58.297828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.454 [2024-05-12 07:01:58.297841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.454 [2024-05-12 07:01:58.297853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.454 [2024-05-12 07:01:58.297867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.454 [2024-05-12 07:01:58.297896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.454 [2024-05-12 07:01:58.297909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.454 [2024-05-12 07:01:58.297922] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.454 [2024-05-12 07:01:58.297935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.454 [2024-05-12 07:01:58.297948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.454 [2024-05-12 07:01:58.297962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.454 [2024-05-12 07:01:58.297989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.297993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.454 [2024-05-12 07:01:58.298005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 
[2024-05-12 07:01:58.298009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.454 [2024-05-12 07:01:58.298018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.298027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.454 [2024-05-12 07:01:58.298031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.298043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.454 [2024-05-12 07:01:58.298044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.298059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.298059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.454 [2024-05-12 07:01:58.298073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.298077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.454 [2024-05-12 07:01:58.298086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.454 [2024-05-12 07:01:58.298091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:51.454 [2024-05-12 07:01:58.298099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298266] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02030 is same with the state(5) to be set 00:21:51.455 [2024-05-12 07:01:58.298280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25856 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 
07:01:58.298589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:53 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.298978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.298993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.299007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.299023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.299036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.299051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.299065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.299080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.299094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:51.455 [2024-05-12 07:01:58.299109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.455 [2024-05-12 07:01:58.299123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.299138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.299152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.299167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.299181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.299195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a3820 is same with the state(5) to be set 00:21:51.456 [2024-05-12 07:01:58.299264] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26a3820 was disconnected and freed. reset controller. 
00:21:51.456 [2024-05-12 07:01:58.315991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2523e60 is same with the state(5) to be set 00:21:51.456 [2024-05-12 07:01:58.316225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26eb8c0 (9): Bad file descriptor 00:21:51.456 [2024-05-12 07:01:58.316260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248da50 (9): Bad file descriptor 00:21:51.456 [2024-05-12 07:01:58.316284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x252a210 (9): Bad file descriptor 00:21:51.456 [2024-05-12 07:01:58.316311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252cf70 (9): Bad file descriptor 00:21:51.456 [2024-05-12 07:01:58.316363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26eb490 is same with the state(5) to be set 00:21:51.456 [2024-05-12 07:01:58.316506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25654f0 (9): Bad file descriptor 00:21:51.456 [2024-05-12 07:01:58.316558] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2523a30 is same with the state(5) to be set 00:21:51.456 [2024-05-12 07:01:58.316713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f0a60 (9): Bad file descriptor 00:21:51.456 [2024-05-12 07:01:58.316768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.456 [2024-05-12 07:01:58.316869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.316884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ae830 is same with the state(5) to be set 00:21:51.456 [2024-05-12 07:01:58.319545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.319978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.319992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.320011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.320025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.320041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.320054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.320070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.456 [2024-05-12 07:01:58.320083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.456 [2024-05-12 07:01:58.320099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320161] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320321] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 
07:01:58.320670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.320980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.320995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:51.457 [2024-05-12 07:01:58.321212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.457 [2024-05-12 07:01:58.321350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.457 [2024-05-12 07:01:58.321367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321381] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x266c7f0 is same with the state(5) to be set 00:21:51.458 [2024-05-12 07:01:58.321638] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x266c7f0 was disconnected and freed. reset controller. 00:21:51.458 [2024-05-12 07:01:58.321854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.321980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.321997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:51.458 [2024-05-12 07:01:58.322030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.458 [2024-05-12 07:01:58.322652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.458 [2024-05-12 07:01:58.322668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 
07:01:58.322721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.322983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.322997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 
07:01:58.323416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.459 [2024-05-12 07:01:58.323826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.459 [2024-05-12 07:01:58.323841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3724b70 is same with the state(5) to be set 00:21:51.459 [2024-05-12 07:01:58.324998] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x3724b70 was disconnected and freed. reset controller. 
00:21:51.459 [2024-05-12 07:01:58.325037] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:51.459 [2024-05-12 07:01:58.325063] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:51.459 [2024-05-12 07:01:58.327942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.459 [2024-05-12 07:01:58.328103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.459 [2024-05-12 07:01:58.328136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26f0a60 with addr=10.0.0.2, port=4420 00:21:51.460 [2024-05-12 07:01:58.328157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f0a60 is same with the state(5) to be set 00:21:51.460 [2024-05-12 07:01:58.328302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.460 [2024-05-12 07:01:58.328477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.460 [2024-05-12 07:01:58.328502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248da50 with addr=10.0.0.2, port=4420 00:21:51.460 [2024-05-12 07:01:58.328518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248da50 is same with the state(5) to be set 00:21:51.460 [2024-05-12 07:01:58.328548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2523e60 (9): Bad file descriptor 00:21:51.460 [2024-05-12 07:01:58.328594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26eb490 (9): Bad file descriptor 00:21:51.460 [2024-05-12 07:01:58.328625] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:51.460 [2024-05-12 07:01:58.328646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2523a30 (9): Bad file descriptor 00:21:51.460 [2024-05-12 07:01:58.328675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ae830 (9): Bad file descriptor 00:21:51.460 [2024-05-12 07:01:58.329423] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:51.460 [2024-05-12 07:01:58.330569] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:51.460 [2024-05-12 07:01:58.330602] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:51.460 [2024-05-12 07:01:58.330641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f0a60 (9): Bad file descriptor 00:21:51.460 [2024-05-12 07:01:58.330663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248da50 (9): Bad file descriptor 00:21:51.460 [2024-05-12 07:01:58.330751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.330776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.330799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.330815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.330832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.330846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.330862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.330876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.330891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.330906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.330921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.330941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.330957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.330971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.330986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:51.460 [2024-05-12 07:01:58.331201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331366] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:17 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.460 [2024-05-12 07:01:58.331799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.460 [2024-05-12 07:01:58.331814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.331828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.331844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.331858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.331874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 
07:01:58.331888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.331904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.331917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.331933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.331947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.331962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.331976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.331991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.332677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.332691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.333936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.333959] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.333981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.333997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.334013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.334027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.334044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.334058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.334074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.334095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.334111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.334126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.334153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.334167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.334183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.334197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.334213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.334227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.334243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.334258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.461 [2024-05-12 07:01:58.334273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.461 [2024-05-12 07:01:58.334287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 
07:01:58.334332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:51.462 [2024-05-12 07:01:58.334849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.334973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.334987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335016] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:51 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.462 [2024-05-12 07:01:58.335515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.462 [2024-05-12 07:01:58.335529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 
07:01:58.335559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.335907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.335920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337499] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337666] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.463 [2024-05-12 07:01:58.337823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.463 [2024-05-12 07:01:58.337838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17024 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.337852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.337867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.337881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.337896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.337909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.337925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.337939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.337954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.337967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.337982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.337996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 
07:01:58.338011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.338976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.338991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.339005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:51.464 [2024-05-12 07:01:58.339020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.464 [2024-05-12 07:01:58.339033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.339048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.339062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.339077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.339091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.339105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2669c30 is same with the state(5) to be set 00:21:51.465 [2024-05-12 07:01:58.340374] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:51.465 [2024-05-12 07:01:58.340799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.340827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.340851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.340867] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.340883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.340898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.340915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.340929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.340945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.340966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.340982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.340996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 
07:01:58.341223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.465 [2024-05-12 07:01:58.341683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.465 [2024-05-12 07:01:58.341717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266ddd0 is same with the state(5) to be set 00:21:51.465 [2024-05-12 07:01:58.341799] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x266ddd0 was disconnected and freed. reset controller. 
00:21:51.465 [2024-05-12 07:01:58.341879] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:51.465 [2024-05-12 07:01:58.342187] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:51.465 [2024-05-12 07:01:58.342216] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:51.465 [2024-05-12 07:01:58.342235] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:51.465 [2024-05-12 07:01:58.342637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.465 [2024-05-12 07:01:58.342798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.465 [2024-05-12 07:01:58.342824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2523e60 with addr=10.0.0.2, port=4420 00:21:51.465 [2024-05-12 07:01:58.342841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2523e60 is same with the state(5) to be set 00:21:51.465 [2024-05-12 07:01:58.343016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.465 [2024-05-12 07:01:58.343173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.465 [2024-05-12 07:01:58.343198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25654f0 with addr=10.0.0.2, port=4420 00:21:51.465 [2024-05-12 07:01:58.343213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25654f0 is same with the state(5) to be set 00:21:51.465 [2024-05-12 07:01:58.343229] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:51.465 [2024-05-12 07:01:58.343242] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 
00:21:51.465 [2024-05-12 07:01:58.343258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:51.465 [2024-05-12 07:01:58.343280] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:51.465 [2024-05-12 07:01:58.343294] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:51.465 [2024-05-12 07:01:58.343307] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:51.465 [2024-05-12 07:01:58.343356] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.465 [2024-05-12 07:01:58.343381] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.465 [2024-05-12 07:01:58.343414] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.465 [2024-05-12 07:01:58.343448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25654f0 (9): Bad file descriptor 00:21:51.466 [2024-05-12 07:01:58.343474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2523e60 (9): Bad file descriptor 00:21:51.466 [2024-05-12 07:01:58.344576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.466 [2024-05-12 07:01:58.344600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:51.466 [2024-05-12 07:01:58.344623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:51.466 [2024-05-12 07:01:58.344807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.466 [2024-05-12 07:01:58.344958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.466 [2024-05-12 07:01:58.344983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252a210 with addr=10.0.0.2, port=4420 00:21:51.466 [2024-05-12 07:01:58.344998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252a210 is same with the state(5) to be set 00:21:51.466 [2024-05-12 07:01:58.345140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.466 [2024-05-12 07:01:58.345299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.466 [2024-05-12 07:01:58.345324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x252cf70 with addr=10.0.0.2, port=4420 00:21:51.466 [2024-05-12 07:01:58.345340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252cf70 is same with the state(5) to be set 00:21:51.466 [2024-05-12 07:01:58.345508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.466 [2024-05-12 07:01:58.345655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.466 [2024-05-12 07:01:58.345680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26eb8c0 with addr=10.0.0.2, port=4420 00:21:51.466 [2024-05-12 07:01:58.345700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26eb8c0 is same with the state(5) to be set 00:21:51.466 [2024-05-12 07:01:58.346556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346939] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.346984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.346997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 
07:01:58.347283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.466 [2024-05-12 07:01:58.347582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.466 [2024-05-12 07:01:58.347595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:51.467 [2024-05-12 07:01:58.347799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347962] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.347980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.347993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:40 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 
[2024-05-12 07:01:58.348460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.348489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.348503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266b210 is same with the state(5) to be set 00:21:51.467 [2024-05-12 07:01:58.349765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.349788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.349809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.349825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.349841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.349855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.467 [2024-05-12 07:01:58.349870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.467 [2024-05-12 07:01:58.349884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.349900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.349914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.349935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.349950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.349966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.349980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.349996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:51.468 [2024-05-12 07:01:58.350236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350402] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 
07:01:58.350927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.350973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.350987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.351003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.351017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.351032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.351046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.351062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.351079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.351095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.351109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.468 [2024-05-12 07:01:58.351125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.468 [2024-05-12 07:01:58.351139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.469 [2024-05-12 07:01:58.351588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.469 [2024-05-12 07:01:58.351603] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:51.469 [2024-05-12 07:01:58.351616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:51.469 [2024-05-12 07:01:58.351631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:51.469 [2024-05-12 07:01:58.351644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:51.469 [2024-05-12 07:01:58.351659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:51.469 [2024-05-12 07:01:58.351672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:51.469 [2024-05-12 07:01:58.351687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:51.469 [2024-05-12 07:01:58.351709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:51.469 [2024-05-12 07:01:58.351724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3581fa0 is same with the state(5) to be set
00:21:51.469 [2024-05-12 07:01:58.353994] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:51.469 task offset: 18944 on job bdev=Nvme2n1 fails
00:21:51.469 
00:21:51.469 Latency(us)
00:21:51.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:51.469 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme1n1 ended in about 0.53 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme1n1 : 0.53 237.45 14.84 120.61 0.00 177129.63 115731.72 153014.42
00:21:51.469 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme2n1 ended in about 0.51 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme2n1 : 0.51 248.54 15.53 124.27 0.00 167151.19 14563.56 183306.62
00:21:51.469 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme3n1 ended in about 0.53 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme3n1 : 0.53 236.02 14.75 119.88 0.00 172575.58 116508.44 142140.30
00:21:51.469 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme4n1 ended in about 0.52 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme4n1 : 0.52 317.69 19.86 123.98 0.00 136706.65 25437.68 127382.57
00:21:51.469 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme5n1 ended in about 0.54 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme5n1 : 0.54 234.63 14.66 119.17 0.00 168597.94 113401.55 135926.52
00:21:51.469 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme6n1 ended in about 0.55 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme6n1 : 0.55 230.60 14.41 117.13 0.00 169241.80 97867.09 146023.92
00:21:51.469 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme7n1 ended in about 0.52 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme7n1 : 0.52 313.34 19.58 122.28 0.00 132454.90 19029.71 129712.73
00:21:51.469 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme8n1 ended in about 0.54 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme8n1 : 0.54 299.25 18.70 51.72 0.00 160002.38 13398.47 153014.42
00:21:51.469 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme9n1 ended in about 0.55 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme9n1 : 0.55 229.25 14.33 116.45 0.00 162825.66 100197.26 139810.13
00:21:51.469 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.469 Job: Nvme10n1 ended in about 0.52 seconds with error
00:21:51.469 Verification LBA range: start 0x0 length 0x400
00:21:51.469 Nvme10n1 : 0.52 240.19 15.01 122.00 0.00 151922.28 11165.39 167772.16
00:21:51.469 ===================================================================================================================
00:21:51.469 Total : 2586.96 161.69 1137.50 0.00 158896.85 11165.39 183306.62
00:21:51.469 [2024-05-12 07:01:58.383366] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:51.469 [2024-05-12 07:01:58.383452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:51.469 [2024-05-12 07:01:58.383783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:51.469 [2024-05-12 07:01:58.383942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:51.469 [2024-05-12 07:01:58.383969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2523a30 with addr=10.0.0.2, port=4420
00:21:51.469 [2024-05-12 07:01:58.383990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2523a30 is same with the state(5) to be set
00:21:51.469 [2024-05-12 07:01:58.384021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252a210 (9): Bad file descriptor
00:21:51.469 [2024-05-12 07:01:58.384047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252cf70 (9): Bad file descriptor 00:21:51.469 [2024-05-12 07:01:58.384065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26eb8c0 (9): Bad file descriptor 00:21:51.469 [2024-05-12 07:01:58.384082] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:51.469 [2024-05-12 07:01:58.384096] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:51.469 [2024-05-12 07:01:58.384125] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:51.469 [2024-05-12 07:01:58.384152] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:51.469 [2024-05-12 07:01:58.384167] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:51.469 [2024-05-12 07:01:58.384180] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:51.469 [2024-05-12 07:01:58.384249] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.469 [2024-05-12 07:01:58.384278] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.469 [2024-05-12 07:01:58.384297] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.469 [2024-05-12 07:01:58.384316] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.470 [2024-05-12 07:01:58.384335] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:51.470 [2024-05-12 07:01:58.384773] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.470 [2024-05-12 07:01:58.384797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.470 [2024-05-12 07:01:58.385002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.385158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.385185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26eb490 with addr=10.0.0.2, port=4420 00:21:51.470 [2024-05-12 07:01:58.385201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26eb490 is same with the state(5) to be set 00:21:51.470 [2024-05-12 07:01:58.385350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.385500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.385525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ae830 with addr=10.0.0.2, port=4420 00:21:51.470 [2024-05-12 07:01:58.385540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ae830 is same with the state(5) to be set 00:21:51.470 [2024-05-12 07:01:58.385559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2523a30 (9): Bad file descriptor 00:21:51.470 [2024-05-12 07:01:58.385576] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.385589] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.385602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:51.470 [2024-05-12 07:01:58.385620] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.385634] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.385647] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:51.470 [2024-05-12 07:01:58.385663] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.385677] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.385689] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:51.470 [2024-05-12 07:01:58.385719] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.470 [2024-05-12 07:01:58.385752] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.470 [2024-05-12 07:01:58.385778] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.470 [2024-05-12 07:01:58.385808] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.470 [2024-05-12 07:01:58.385826] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:51.470 [2024-05-12 07:01:58.385843] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:51.470 [2024-05-12 07:01:58.386441] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:51.470 [2024-05-12 07:01:58.386468] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:51.470 [2024-05-12 07:01:58.386506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.470 [2024-05-12 07:01:58.386522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.470 [2024-05-12 07:01:58.386533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.470 [2024-05-12 07:01:58.386571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26eb490 (9): Bad file descriptor 00:21:51.470 [2024-05-12 07:01:58.386595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ae830 (9): Bad file descriptor 00:21:51.470 [2024-05-12 07:01:58.386610] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.386623] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.386635] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:51.470 [2024-05-12 07:01:58.386713] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:51.470 [2024-05-12 07:01:58.386738] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:51.470 [2024-05-12 07:01:58.386754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:51.470 [2024-05-12 07:01:58.386911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.387064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.387090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248da50 with addr=10.0.0.2, port=4420 00:21:51.470 [2024-05-12 07:01:58.387106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248da50 is same with the state(5) to be set 00:21:51.470 [2024-05-12 07:01:58.387244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.387390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.387415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26f0a60 with addr=10.0.0.2, port=4420 00:21:51.470 [2024-05-12 07:01:58.387430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f0a60 is same with the state(5) to be set 00:21:51.470 [2024-05-12 07:01:58.387444] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.387457] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.387469] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:21:51.470 [2024-05-12 07:01:58.387486] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.387499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.387511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:51.470 [2024-05-12 07:01:58.387569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.470 [2024-05-12 07:01:58.387589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.470 [2024-05-12 07:01:58.387729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.387871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.387896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25654f0 with addr=10.0.0.2, port=4420 00:21:51.470 [2024-05-12 07:01:58.387911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25654f0 is same with the state(5) to be set 00:21:51.470 [2024-05-12 07:01:58.388056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.388209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.470 [2024-05-12 07:01:58.388234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2523e60 with addr=10.0.0.2, port=4420 00:21:51.470 [2024-05-12 07:01:58.388249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2523e60 is same with the state(5) to be set 00:21:51.470 [2024-05-12 07:01:58.388267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248da50 (9): Bad file descriptor 
00:21:51.470 [2024-05-12 07:01:58.388285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f0a60 (9): Bad file descriptor 00:21:51.470 [2024-05-12 07:01:58.388327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25654f0 (9): Bad file descriptor 00:21:51.470 [2024-05-12 07:01:58.388350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2523e60 (9): Bad file descriptor 00:21:51.470 [2024-05-12 07:01:58.388366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.388378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.388390] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:51.470 [2024-05-12 07:01:58.388406] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.388419] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.388431] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:51.470 [2024-05-12 07:01:58.388467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.470 [2024-05-12 07:01:58.388485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:51.470 [2024-05-12 07:01:58.388498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.388511] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.388523] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:51.470 [2024-05-12 07:01:58.388538] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:51.470 [2024-05-12 07:01:58.388551] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:51.470 [2024-05-12 07:01:58.388564] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:51.470 [2024-05-12 07:01:58.388600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.470 [2024-05-12 07:01:58.388617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:52.059 07:01:58 -- target/shutdown.sh@135 -- # nvmfpid= 00:21:52.059 07:01:58 -- target/shutdown.sh@138 -- # sleep 1 00:21:52.997 07:01:59 -- target/shutdown.sh@141 -- # kill -9 3092992 00:21:52.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3092992) - No such process 00:21:52.997 07:01:59 -- target/shutdown.sh@141 -- # true 00:21:52.997 07:01:59 -- target/shutdown.sh@143 -- # stoptarget 00:21:52.997 07:01:59 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:52.997 07:01:59 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:52.997 07:01:59 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:52.997 07:01:59 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:52.997 07:01:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:52.997 07:01:59 -- nvmf/common.sh@116 -- # sync 00:21:52.997 07:01:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:52.997 07:01:59 -- nvmf/common.sh@119 -- # set +e 00:21:52.997 07:01:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:52.997 07:01:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:52.997 rmmod nvme_tcp 00:21:52.997 rmmod nvme_fabrics 00:21:52.997 rmmod nvme_keyring 00:21:52.997 07:01:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:52.997 07:01:59 -- nvmf/common.sh@123 -- # set -e 00:21:52.997 07:01:59 -- nvmf/common.sh@124 -- # return 0 00:21:52.997 07:01:59 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:21:52.997 07:01:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:52.997 07:01:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:52.997 07:01:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:52.997 07:01:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.997 07:01:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:52.997 07:01:59 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.997 07:01:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.997 07:01:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.900 07:02:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:54.900 00:21:54.900 real 0m7.681s 00:21:54.900 user 0m18.855s 00:21:54.900 sys 0m1.369s 00:21:54.900 07:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:54.900 07:02:02 -- common/autotest_common.sh@10 -- # set +x 00:21:54.900 ************************************ 00:21:54.900 END TEST nvmf_shutdown_tc3 00:21:54.900 ************************************ 00:21:54.900 07:02:02 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:21:54.900 00:21:54.900 real 0m27.729s 00:21:54.900 user 1m17.535s 00:21:54.900 sys 0m6.184s 00:21:54.900 07:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:54.900 07:02:02 -- common/autotest_common.sh@10 -- # set +x 00:21:54.900 ************************************ 00:21:54.900 END TEST nvmf_shutdown 00:21:54.900 ************************************ 00:21:55.161 07:02:02 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:21:55.161 07:02:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:55.161 07:02:02 -- common/autotest_common.sh@10 -- # set +x 00:21:55.161 07:02:02 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:21:55.161 07:02:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:55.161 07:02:02 -- common/autotest_common.sh@10 -- # set +x 00:21:55.161 07:02:02 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:21:55.161 07:02:02 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:55.161 07:02:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:55.161 07:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:55.161 07:02:02 -- common/autotest_common.sh@10 -- # set 
+x 00:21:55.161 ************************************ 00:21:55.161 START TEST nvmf_multicontroller 00:21:55.161 ************************************ 00:21:55.161 07:02:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:55.161 * Looking for test storage... 00:21:55.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.161 07:02:02 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.161 07:02:02 -- nvmf/common.sh@7 -- # uname -s 00:21:55.161 07:02:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.161 07:02:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.161 07:02:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.161 07:02:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.161 07:02:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.161 07:02:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.161 07:02:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.161 07:02:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.161 07:02:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.161 07:02:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.161 07:02:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.161 07:02:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.161 07:02:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.161 07:02:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.161 07:02:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.161 07:02:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.161 07:02:02 -- scripts/common.sh@433 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:21:55.161 07:02:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.161 07:02:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.161 07:02:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.161 07:02:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.161 07:02:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.161 07:02:02 -- paths/export.sh@5 -- # export PATH 
00:21:55.161 07:02:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.161 07:02:02 -- nvmf/common.sh@46 -- # : 0 00:21:55.161 07:02:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:55.161 07:02:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:55.161 07:02:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:55.161 07:02:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.161 07:02:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.161 07:02:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:55.161 07:02:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:55.161 07:02:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:55.161 07:02:02 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:55.162 07:02:02 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:55.162 07:02:02 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:55.162 07:02:02 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:55.162 07:02:02 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.162 07:02:02 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:55.162 07:02:02 -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:55.162 07:02:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:55.162 07:02:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.162 07:02:02 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:21:55.162 07:02:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:55.162 07:02:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:55.162 07:02:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.162 07:02:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.162 07:02:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.162 07:02:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:55.162 07:02:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:55.162 07:02:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:55.162 07:02:02 -- common/autotest_common.sh@10 -- # set +x 00:21:57.063 07:02:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:57.063 07:02:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:57.063 07:02:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:57.063 07:02:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:57.063 07:02:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:57.063 07:02:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:57.063 07:02:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:57.063 07:02:03 -- nvmf/common.sh@294 -- # net_devs=() 00:21:57.063 07:02:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:57.063 07:02:03 -- nvmf/common.sh@295 -- # e810=() 00:21:57.063 07:02:03 -- nvmf/common.sh@295 -- # local -ga e810 00:21:57.063 07:02:03 -- nvmf/common.sh@296 -- # x722=() 00:21:57.063 07:02:03 -- nvmf/common.sh@296 -- # local -ga x722 00:21:57.063 07:02:03 -- nvmf/common.sh@297 -- # mlx=() 00:21:57.063 07:02:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:57.063 07:02:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.063 07:02:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:57.063 07:02:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:57.063 07:02:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:57.063 07:02:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:57.063 07:02:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:57.063 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:57.063 07:02:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:57.063 07:02:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:57.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:57.063 07:02:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:57.063 
07:02:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:57.063 07:02:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.063 07:02:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.063 07:02:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:57.063 07:02:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.063 07:02:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:57.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:57.063 07:02:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.063 07:02:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.063 07:02:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.063 07:02:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:57.063 07:02:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.063 07:02:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:57.063 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:57.063 07:02:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.063 07:02:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:57.063 07:02:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:57.063 07:02:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:57.063 07:02:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:57.063 07:02:03 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:57.063 07:02:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.063 07:02:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.063 07:02:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:57.063 07:02:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.063 07:02:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.063 07:02:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:57.063 07:02:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.063 07:02:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.063 07:02:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:57.063 07:02:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:57.063 07:02:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.063 07:02:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.063 07:02:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.063 07:02:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.063 07:02:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:57.063 07:02:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.063 07:02:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.063 07:02:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.063 07:02:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:57.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:57.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:21:57.063 00:21:57.063 --- 10.0.0.2 ping statistics --- 00:21:57.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.063 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:57.063 07:02:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:21:57.064 00:21:57.064 --- 10.0.0.1 ping statistics --- 00:21:57.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.064 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:57.064 07:02:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.064 07:02:04 -- nvmf/common.sh@410 -- # return 0 00:21:57.064 07:02:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:57.064 07:02:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.064 07:02:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:57.064 07:02:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:57.064 07:02:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.064 07:02:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:57.064 07:02:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:57.064 07:02:04 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:57.064 07:02:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:57.064 07:02:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:57.064 07:02:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.064 07:02:04 -- nvmf/common.sh@469 -- # nvmfpid=3095416 00:21:57.064 07:02:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:57.064 07:02:04 -- nvmf/common.sh@470 -- # waitforlisten 3095416 00:21:57.064 07:02:04 -- 
common/autotest_common.sh@819 -- # '[' -z 3095416 ']' 00:21:57.064 07:02:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.064 07:02:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:57.064 07:02:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.064 07:02:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:57.064 07:02:04 -- common/autotest_common.sh@10 -- # set +x 00:21:57.064 [2024-05-12 07:02:04.170476] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:57.064 [2024-05-12 07:02:04.170540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.321 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.321 [2024-05-12 07:02:04.238018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:57.321 [2024-05-12 07:02:04.351645] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.321 [2024-05-12 07:02:04.351793] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.321 [2024-05-12 07:02:04.351810] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.321 [2024-05-12 07:02:04.351823] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:57.321 [2024-05-12 07:02:04.355723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.321 [2024-05-12 07:02:04.355789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.321 [2024-05-12 07:02:04.355785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.252 07:02:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:58.252 07:02:05 -- common/autotest_common.sh@852 -- # return 0 00:21:58.252 07:02:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:58.252 07:02:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 07:02:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.252 07:02:05 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 [2024-05-12 07:02:05.212086] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 Malloc0 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 [2024-05-12 07:02:05.275017] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 [2024-05-12 07:02:05.282867] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 Malloc1 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- 
host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:58.252 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:58.252 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.252 07:02:05 -- host/multicontroller.sh@44 -- # bdevperf_pid=3095571 00:21:58.252 07:02:05 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.252 07:02:05 -- host/multicontroller.sh@47 -- # waitforlisten 3095571 /var/tmp/bdevperf.sock 00:21:58.252 07:02:05 -- common/autotest_common.sh@819 -- # '[' -z 3095571 ']' 00:21:58.252 07:02:05 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:58.252 07:02:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.252 07:02:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:58.252 07:02:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:58.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.252 07:02:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:58.252 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.622 07:02:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:59.622 07:02:06 -- common/autotest_common.sh@852 -- # return 0 00:21:59.622 07:02:06 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:59.622 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.622 07:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:59.622 NVMe0n1 00:21:59.622 07:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.622 07:02:06 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:59.622 07:02:06 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:59.622 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.622 07:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:59.622 07:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.622 1 00:21:59.622 07:02:06 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:59.622 07:02:06 -- common/autotest_common.sh@640 -- # local es=0 00:21:59.622 07:02:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:59.622 07:02:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:59.622 07:02:06 -- common/autotest_common.sh@632 -- # 
case "$(type -t "$arg")" in 00:21:59.622 07:02:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:59.622 07:02:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:59.622 07:02:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:59.622 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.622 07:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:59.622 request: 00:21:59.622 { 00:21:59.622 "name": "NVMe0", 00:21:59.622 "trtype": "tcp", 00:21:59.622 "traddr": "10.0.0.2", 00:21:59.622 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:59.622 "hostaddr": "10.0.0.2", 00:21:59.622 "hostsvcid": "60000", 00:21:59.622 "adrfam": "ipv4", 00:21:59.622 "trsvcid": "4420", 00:21:59.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.622 "method": "bdev_nvme_attach_controller", 00:21:59.622 "req_id": 1 00:21:59.622 } 00:21:59.622 Got JSON-RPC error response 00:21:59.622 response: 00:21:59.622 { 00:21:59.622 "code": -114, 00:21:59.622 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:59.622 } 00:21:59.622 07:02:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:59.622 07:02:06 -- common/autotest_common.sh@643 -- # es=1 00:21:59.622 07:02:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:59.622 07:02:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:59.622 07:02:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:59.622 07:02:06 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:59.622 07:02:06 -- common/autotest_common.sh@640 -- # local es=0 00:21:59.622 07:02:06 -- common/autotest_common.sh@642 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:59.622 07:02:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:59.622 07:02:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:59.622 07:02:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:59.622 07:02:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:59.622 07:02:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:59.622 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.622 07:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:59.622 request: 00:21:59.622 { 00:21:59.622 "name": "NVMe0", 00:21:59.622 "trtype": "tcp", 00:21:59.622 "traddr": "10.0.0.2", 00:21:59.622 "hostaddr": "10.0.0.2", 00:21:59.622 "hostsvcid": "60000", 00:21:59.622 "adrfam": "ipv4", 00:21:59.622 "trsvcid": "4420", 00:21:59.622 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.622 "method": "bdev_nvme_attach_controller", 00:21:59.622 "req_id": 1 00:21:59.622 } 00:21:59.622 Got JSON-RPC error response 00:21:59.622 response: 00:21:59.622 { 00:21:59.622 "code": -114, 00:21:59.622 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:59.622 } 00:21:59.622 07:02:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:59.622 07:02:06 -- common/autotest_common.sh@643 -- # es=1 00:21:59.622 07:02:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:59.622 07:02:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:59.622 07:02:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:59.622 07:02:06 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:59.622 07:02:06 -- common/autotest_common.sh@640 -- # local es=0 00:21:59.622 07:02:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:59.622 07:02:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:59.622 07:02:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:59.622 07:02:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:59.622 07:02:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:59.622 07:02:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:59.622 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.622 07:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:59.622 request: 00:21:59.622 { 00:21:59.622 "name": "NVMe0", 00:21:59.622 "trtype": "tcp", 00:21:59.622 "traddr": "10.0.0.2", 00:21:59.622 "hostaddr": "10.0.0.2", 00:21:59.622 "hostsvcid": "60000", 00:21:59.622 "adrfam": "ipv4", 00:21:59.622 "trsvcid": "4420", 00:21:59.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.623 "multipath": "disable", 00:21:59.623 "method": "bdev_nvme_attach_controller", 00:21:59.623 "req_id": 1 00:21:59.623 } 00:21:59.623 Got JSON-RPC error response 00:21:59.623 response: 00:21:59.623 { 00:21:59.623 "code": -114, 00:21:59.623 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:59.623 } 00:21:59.623 07:02:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:59.623 07:02:06 -- common/autotest_common.sh@643 -- # es=1 00:21:59.623 07:02:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:59.623 07:02:06 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:59.623 07:02:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:59.623 07:02:06 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:59.623 07:02:06 -- common/autotest_common.sh@640 -- # local es=0 00:21:59.623 07:02:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:59.623 07:02:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:59.623 07:02:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:59.623 07:02:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:59.623 07:02:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:59.623 07:02:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:59.623 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.623 07:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:59.623 request: 00:21:59.623 { 00:21:59.623 "name": "NVMe0", 00:21:59.623 "trtype": "tcp", 00:21:59.623 "traddr": "10.0.0.2", 00:21:59.623 "hostaddr": "10.0.0.2", 00:21:59.623 "hostsvcid": "60000", 00:21:59.623 "adrfam": "ipv4", 00:21:59.623 "trsvcid": "4420", 00:21:59.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.623 "multipath": "failover", 00:21:59.623 "method": "bdev_nvme_attach_controller", 00:21:59.623 "req_id": 1 00:21:59.623 } 00:21:59.623 Got JSON-RPC error response 00:21:59.623 response: 00:21:59.623 { 00:21:59.623 "code": -114, 00:21:59.623 "message": "A controller named NVMe0 already exists with the 
specified network path\n" 00:21:59.623 } 00:21:59.623 07:02:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:59.623 07:02:06 -- common/autotest_common.sh@643 -- # es=1 00:21:59.623 07:02:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:59.623 07:02:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:59.623 07:02:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:59.623 07:02:06 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:59.623 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.623 07:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:59.623 00:21:59.623 07:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.623 07:02:06 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:59.623 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.623 07:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:59.623 07:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.623 07:02:06 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:59.623 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.623 07:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:59.880 00:21:59.880 07:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.880 07:02:06 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:59.880 07:02:06 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:59.880 07:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.880 07:02:06 -- common/autotest_common.sh@10 -- # set +x 
00:21:59.880 07:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.880 07:02:06 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:59.880 07:02:06 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.254 0 00:22:01.254 07:02:07 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:01.254 07:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.254 07:02:07 -- common/autotest_common.sh@10 -- # set +x 00:22:01.254 07:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.254 07:02:08 -- host/multicontroller.sh@100 -- # killprocess 3095571 00:22:01.254 07:02:08 -- common/autotest_common.sh@926 -- # '[' -z 3095571 ']' 00:22:01.254 07:02:08 -- common/autotest_common.sh@930 -- # kill -0 3095571 00:22:01.254 07:02:08 -- common/autotest_common.sh@931 -- # uname 00:22:01.254 07:02:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:01.254 07:02:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3095571 00:22:01.254 07:02:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:01.254 07:02:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:01.254 07:02:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3095571' 00:22:01.254 killing process with pid 3095571 00:22:01.254 07:02:08 -- common/autotest_common.sh@945 -- # kill 3095571 00:22:01.254 07:02:08 -- common/autotest_common.sh@950 -- # wait 3095571 00:22:01.254 07:02:08 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.254 07:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.254 07:02:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.254 07:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.254 07:02:08 -- host/multicontroller.sh@103 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:01.254 07:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.254 07:02:08 -- common/autotest_common.sh@10 -- # set +x 00:22:01.254 07:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.254 07:02:08 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:01.254 07:02:08 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:01.254 07:02:08 -- common/autotest_common.sh@1597 -- # read -r file 00:22:01.254 07:02:08 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:01.254 07:02:08 -- common/autotest_common.sh@1596 -- # sort -u 00:22:01.254 07:02:08 -- common/autotest_common.sh@1598 -- # cat 00:22:01.254 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:01.254 [2024-05-12 07:02:05.382808] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:01.254 [2024-05-12 07:02:05.382889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095571 ] 00:22:01.254 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.254 [2024-05-12 07:02:05.442204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.254 [2024-05-12 07:02:05.548924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.254 [2024-05-12 07:02:06.838085] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 267e3ef8-f87a-46a8-92d8-8a4e90fe6f27 already exists 00:22:01.254 [2024-05-12 07:02:06.838125] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:267e3ef8-f87a-46a8-92d8-8a4e90fe6f27 alias for bdev NVMe1n1 00:22:01.254 [2024-05-12 07:02:06.838142] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:01.254 Running I/O for 1 seconds... 00:22:01.254 00:22:01.254 Latency(us) 00:22:01.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.254 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:01.254 NVMe0n1 : 1.01 15279.50 59.69 0.00 0.00 8364.62 7136.14 20194.80 00:22:01.254 =================================================================================================================== 00:22:01.254 Total : 15279.50 59.69 0.00 0.00 8364.62 7136.14 20194.80 00:22:01.254 Received shutdown signal, test time was about 1.000000 seconds 00:22:01.254 00:22:01.254 Latency(us) 00:22:01.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.254 =================================================================================================================== 00:22:01.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.254 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:01.254 07:02:08 -- 
common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:01.254 07:02:08 -- common/autotest_common.sh@1597 -- # read -r file 00:22:01.254 07:02:08 -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:01.254 07:02:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:01.254 07:02:08 -- nvmf/common.sh@116 -- # sync 00:22:01.254 07:02:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:01.254 07:02:08 -- nvmf/common.sh@119 -- # set +e 00:22:01.254 07:02:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:01.254 07:02:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:01.254 rmmod nvme_tcp 00:22:01.254 rmmod nvme_fabrics 00:22:01.254 rmmod nvme_keyring 00:22:01.254 07:02:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:01.254 07:02:08 -- nvmf/common.sh@123 -- # set -e 00:22:01.254 07:02:08 -- nvmf/common.sh@124 -- # return 0 00:22:01.254 07:02:08 -- nvmf/common.sh@477 -- # '[' -n 3095416 ']' 00:22:01.254 07:02:08 -- nvmf/common.sh@478 -- # killprocess 3095416 00:22:01.254 07:02:08 -- common/autotest_common.sh@926 -- # '[' -z 3095416 ']' 00:22:01.254 07:02:08 -- common/autotest_common.sh@930 -- # kill -0 3095416 00:22:01.254 07:02:08 -- common/autotest_common.sh@931 -- # uname 00:22:01.254 07:02:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:01.254 07:02:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3095416 00:22:01.511 07:02:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:01.511 07:02:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:01.511 07:02:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3095416' 00:22:01.511 killing process with pid 3095416 00:22:01.511 07:02:08 -- common/autotest_common.sh@945 -- # kill 3095416 00:22:01.511 07:02:08 -- common/autotest_common.sh@950 -- # wait 3095416 00:22:01.771 07:02:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:01.771 
07:02:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:01.771 07:02:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:01.771 07:02:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:01.771 07:02:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:01.771 07:02:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.771 07:02:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.771 07:02:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.673 07:02:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:03.673 00:22:03.673 real 0m8.697s 00:22:03.673 user 0m16.572s 00:22:03.673 sys 0m2.416s 00:22:03.673 07:02:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:03.673 07:02:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.673 ************************************ 00:22:03.673 END TEST nvmf_multicontroller 00:22:03.673 ************************************ 00:22:03.931 07:02:10 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:03.931 07:02:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:03.931 07:02:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:03.931 07:02:10 -- common/autotest_common.sh@10 -- # set +x 00:22:03.931 ************************************ 00:22:03.931 START TEST nvmf_aer 00:22:03.931 ************************************ 00:22:03.931 07:02:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:03.931 * Looking for test storage... 
00:22:03.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:03.931 07:02:10 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.931 07:02:10 -- nvmf/common.sh@7 -- # uname -s 00:22:03.931 07:02:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.931 07:02:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.931 07:02:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.931 07:02:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.931 07:02:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.931 07:02:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.931 07:02:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.931 07:02:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.931 07:02:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.931 07:02:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.931 07:02:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.931 07:02:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.931 07:02:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.931 07:02:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.931 07:02:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.931 07:02:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.931 07:02:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.931 07:02:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.931 07:02:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.931 07:02:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.931 07:02:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.931 07:02:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.931 07:02:10 -- paths/export.sh@5 -- # export PATH 00:22:03.931 07:02:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.931 07:02:10 -- nvmf/common.sh@46 -- # : 0 00:22:03.931 07:02:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:03.931 07:02:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:03.931 07:02:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:03.931 07:02:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.931 07:02:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.931 07:02:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:03.931 07:02:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:03.931 07:02:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:03.931 07:02:10 -- host/aer.sh@11 -- # nvmftestinit 00:22:03.931 07:02:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:03.931 07:02:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.931 07:02:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:03.931 07:02:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:03.931 07:02:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:03.931 07:02:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.931 07:02:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.931 07:02:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.931 07:02:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:03.931 07:02:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:03.931 07:02:10 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:22:03.931 07:02:10 -- common/autotest_common.sh@10 -- # set +x 00:22:05.828 07:02:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:05.828 07:02:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:05.828 07:02:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:05.828 07:02:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:05.828 07:02:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:05.828 07:02:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:05.828 07:02:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:05.828 07:02:12 -- nvmf/common.sh@294 -- # net_devs=() 00:22:05.828 07:02:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:05.828 07:02:12 -- nvmf/common.sh@295 -- # e810=() 00:22:05.828 07:02:12 -- nvmf/common.sh@295 -- # local -ga e810 00:22:05.828 07:02:12 -- nvmf/common.sh@296 -- # x722=() 00:22:05.828 07:02:12 -- nvmf/common.sh@296 -- # local -ga x722 00:22:05.828 07:02:12 -- nvmf/common.sh@297 -- # mlx=() 00:22:05.828 07:02:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:05.828 07:02:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.828 07:02:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:05.828 07:02:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:05.828 07:02:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:05.828 07:02:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:05.828 07:02:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:05.828 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:05.828 07:02:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:05.828 07:02:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:05.828 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:05.828 07:02:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:05.828 07:02:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:22:05.828 07:02:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.828 07:02:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:05.828 07:02:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.828 07:02:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:05.828 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:05.828 07:02:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.828 07:02:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:05.828 07:02:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.828 07:02:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:05.828 07:02:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.828 07:02:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:05.828 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:05.828 07:02:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.828 07:02:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:05.828 07:02:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:05.828 07:02:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:05.828 07:02:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:05.829 07:02:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.829 07:02:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.829 07:02:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.829 07:02:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:05.829 07:02:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.829 07:02:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.829 07:02:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:05.829 07:02:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:22:05.829 07:02:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.829 07:02:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:05.829 07:02:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:05.829 07:02:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.829 07:02:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.829 07:02:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.829 07:02:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.829 07:02:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:05.829 07:02:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.829 07:02:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.829 07:02:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.829 07:02:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:05.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:22:05.829 00:22:05.829 --- 10.0.0.2 ping statistics --- 00:22:05.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.829 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:22:05.829 07:02:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:22:05.829 00:22:05.829 --- 10.0.0.1 ping statistics --- 00:22:05.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.829 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:22:05.829 07:02:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.829 07:02:12 -- nvmf/common.sh@410 -- # return 0 00:22:05.829 07:02:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:05.829 07:02:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.829 07:02:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:05.829 07:02:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:05.829 07:02:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.829 07:02:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:05.829 07:02:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:05.829 07:02:12 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:05.829 07:02:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:05.829 07:02:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:05.829 07:02:12 -- common/autotest_common.sh@10 -- # set +x 00:22:05.829 07:02:12 -- nvmf/common.sh@469 -- # nvmfpid=3097817 00:22:05.829 07:02:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:05.829 07:02:12 -- nvmf/common.sh@470 -- # waitforlisten 3097817 00:22:05.829 07:02:12 -- common/autotest_common.sh@819 -- # '[' -z 3097817 ']' 00:22:05.829 07:02:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.829 07:02:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:05.829 07:02:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:05.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.829 07:02:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:05.829 07:02:12 -- common/autotest_common.sh@10 -- # set +x 00:22:05.829 [2024-05-12 07:02:12.863376] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:05.829 [2024-05-12 07:02:12.863457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.829 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.829 [2024-05-12 07:02:12.931585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.091 [2024-05-12 07:02:13.039426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:06.091 [2024-05-12 07:02:13.039587] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.091 [2024-05-12 07:02:13.039604] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.091 [2024-05-12 07:02:13.039616] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.091 [2024-05-12 07:02:13.039678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.091 [2024-05-12 07:02:13.039751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.091 [2024-05-12 07:02:13.039783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.091 [2024-05-12 07:02:13.039786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.024 07:02:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:07.024 07:02:13 -- common/autotest_common.sh@852 -- # return 0 00:22:07.024 07:02:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:07.024 07:02:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:07.024 07:02:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.024 07:02:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.024 07:02:13 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.024 07:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.024 07:02:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.024 [2024-05-12 07:02:13.870340] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.024 07:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.024 07:02:13 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:07.024 07:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.024 07:02:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.024 Malloc0 00:22:07.024 07:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.024 07:02:13 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:07.024 07:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.024 07:02:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.024 07:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:22:07.024 07:02:13 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.024 07:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.024 07:02:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.024 07:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.024 07:02:13 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.024 07:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.024 07:02:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.024 [2024-05-12 07:02:13.923176] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.024 07:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.024 07:02:13 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:07.024 07:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.024 07:02:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.024 [2024-05-12 07:02:13.930905] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:07.024 [ 00:22:07.024 { 00:22:07.024 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:07.024 "subtype": "Discovery", 00:22:07.024 "listen_addresses": [], 00:22:07.024 "allow_any_host": true, 00:22:07.024 "hosts": [] 00:22:07.024 }, 00:22:07.024 { 00:22:07.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.024 "subtype": "NVMe", 00:22:07.024 "listen_addresses": [ 00:22:07.024 { 00:22:07.024 "transport": "TCP", 00:22:07.024 "trtype": "TCP", 00:22:07.024 "adrfam": "IPv4", 00:22:07.024 "traddr": "10.0.0.2", 00:22:07.024 "trsvcid": "4420" 00:22:07.024 } 00:22:07.024 ], 00:22:07.024 "allow_any_host": true, 00:22:07.024 "hosts": [], 00:22:07.024 "serial_number": "SPDK00000000000001", 00:22:07.024 "model_number": "SPDK bdev Controller", 
00:22:07.024 "max_namespaces": 2, 00:22:07.024 "min_cntlid": 1, 00:22:07.024 "max_cntlid": 65519, 00:22:07.024 "namespaces": [ 00:22:07.024 { 00:22:07.024 "nsid": 1, 00:22:07.024 "bdev_name": "Malloc0", 00:22:07.024 "name": "Malloc0", 00:22:07.024 "nguid": "B237F44E7695457B9E257B0F3BB6354E", 00:22:07.024 "uuid": "b237f44e-7695-457b-9e25-7b0f3bb6354e" 00:22:07.024 } 00:22:07.024 ] 00:22:07.024 } 00:22:07.024 ] 00:22:07.024 07:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.024 07:02:13 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:07.024 07:02:13 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:07.024 07:02:13 -- host/aer.sh@33 -- # aerpid=3097979 00:22:07.024 07:02:13 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:07.024 07:02:13 -- common/autotest_common.sh@1244 -- # local i=0 00:22:07.024 07:02:13 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:07.025 07:02:13 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:07.025 07:02:13 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:22:07.025 07:02:13 -- common/autotest_common.sh@1247 -- # i=1 00:22:07.025 07:02:13 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:07.025 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.025 07:02:14 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:07.025 07:02:14 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:22:07.025 07:02:14 -- common/autotest_common.sh@1247 -- # i=2 00:22:07.025 07:02:14 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:07.025 07:02:14 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:07.025 07:02:14 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:07.025 07:02:14 -- common/autotest_common.sh@1255 -- # return 0 00:22:07.025 07:02:14 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:07.025 07:02:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.025 07:02:14 -- common/autotest_common.sh@10 -- # set +x 00:22:07.286 Malloc1 00:22:07.286 07:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.286 07:02:14 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:07.286 07:02:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.286 07:02:14 -- common/autotest_common.sh@10 -- # set +x 00:22:07.286 07:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.286 07:02:14 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:07.286 07:02:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.286 07:02:14 -- common/autotest_common.sh@10 -- # set +x 00:22:07.286 Asynchronous Event Request test 00:22:07.286 Attaching to 10.0.0.2 00:22:07.286 Attached to 10.0.0.2 00:22:07.286 Registering asynchronous event callbacks... 00:22:07.286 Starting namespace attribute notice tests for all controllers... 00:22:07.286 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:07.286 aer_cb - Changed Namespace 00:22:07.286 Cleaning up... 
00:22:07.286 [ 00:22:07.286 { 00:22:07.286 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:07.286 "subtype": "Discovery", 00:22:07.286 "listen_addresses": [], 00:22:07.286 "allow_any_host": true, 00:22:07.286 "hosts": [] 00:22:07.286 }, 00:22:07.286 { 00:22:07.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.286 "subtype": "NVMe", 00:22:07.286 "listen_addresses": [ 00:22:07.286 { 00:22:07.286 "transport": "TCP", 00:22:07.286 "trtype": "TCP", 00:22:07.286 "adrfam": "IPv4", 00:22:07.286 "traddr": "10.0.0.2", 00:22:07.286 "trsvcid": "4420" 00:22:07.286 } 00:22:07.286 ], 00:22:07.286 "allow_any_host": true, 00:22:07.286 "hosts": [], 00:22:07.286 "serial_number": "SPDK00000000000001", 00:22:07.286 "model_number": "SPDK bdev Controller", 00:22:07.286 "max_namespaces": 2, 00:22:07.286 "min_cntlid": 1, 00:22:07.286 "max_cntlid": 65519, 00:22:07.286 "namespaces": [ 00:22:07.286 { 00:22:07.286 "nsid": 1, 00:22:07.286 "bdev_name": "Malloc0", 00:22:07.286 "name": "Malloc0", 00:22:07.286 "nguid": "B237F44E7695457B9E257B0F3BB6354E", 00:22:07.286 "uuid": "b237f44e-7695-457b-9e25-7b0f3bb6354e" 00:22:07.286 }, 00:22:07.286 { 00:22:07.286 "nsid": 2, 00:22:07.286 "bdev_name": "Malloc1", 00:22:07.286 "name": "Malloc1", 00:22:07.286 "nguid": "3209D18AAF4249B8BDB5985CD4ABC9A8", 00:22:07.286 "uuid": "3209d18a-af42-49b8-bdb5-985cd4abc9a8" 00:22:07.286 } 00:22:07.286 ] 00:22:07.286 } 00:22:07.286 ] 00:22:07.286 07:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.286 07:02:14 -- host/aer.sh@43 -- # wait 3097979 00:22:07.286 07:02:14 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:07.286 07:02:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.286 07:02:14 -- common/autotest_common.sh@10 -- # set +x 00:22:07.286 07:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.286 07:02:14 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:07.286 07:02:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.286 
07:02:14 -- common/autotest_common.sh@10 -- # set +x 00:22:07.286 07:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.286 07:02:14 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.286 07:02:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.286 07:02:14 -- common/autotest_common.sh@10 -- # set +x 00:22:07.286 07:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.286 07:02:14 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:07.286 07:02:14 -- host/aer.sh@51 -- # nvmftestfini 00:22:07.286 07:02:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:07.286 07:02:14 -- nvmf/common.sh@116 -- # sync 00:22:07.286 07:02:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:07.286 07:02:14 -- nvmf/common.sh@119 -- # set +e 00:22:07.286 07:02:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:07.286 07:02:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:07.286 rmmod nvme_tcp 00:22:07.286 rmmod nvme_fabrics 00:22:07.286 rmmod nvme_keyring 00:22:07.286 07:02:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:07.286 07:02:14 -- nvmf/common.sh@123 -- # set -e 00:22:07.286 07:02:14 -- nvmf/common.sh@124 -- # return 0 00:22:07.286 07:02:14 -- nvmf/common.sh@477 -- # '[' -n 3097817 ']' 00:22:07.286 07:02:14 -- nvmf/common.sh@478 -- # killprocess 3097817 00:22:07.286 07:02:14 -- common/autotest_common.sh@926 -- # '[' -z 3097817 ']' 00:22:07.286 07:02:14 -- common/autotest_common.sh@930 -- # kill -0 3097817 00:22:07.286 07:02:14 -- common/autotest_common.sh@931 -- # uname 00:22:07.286 07:02:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:07.286 07:02:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3097817 00:22:07.286 07:02:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:07.286 07:02:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:07.286 07:02:14 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 3097817' 00:22:07.286 killing process with pid 3097817 00:22:07.286 07:02:14 -- common/autotest_common.sh@945 -- # kill 3097817 00:22:07.286 [2024-05-12 07:02:14.373328] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:07.286 07:02:14 -- common/autotest_common.sh@950 -- # wait 3097817 00:22:07.545 07:02:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:07.545 07:02:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:07.545 07:02:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:07.545 07:02:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:07.545 07:02:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:07.545 07:02:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.545 07:02:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.545 07:02:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.072 07:02:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:10.072 00:22:10.072 real 0m5.863s 00:22:10.072 user 0m7.003s 00:22:10.072 sys 0m1.822s 00:22:10.072 07:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.072 07:02:16 -- common/autotest_common.sh@10 -- # set +x 00:22:10.072 ************************************ 00:22:10.072 END TEST nvmf_aer 00:22:10.072 ************************************ 00:22:10.072 07:02:16 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:10.072 07:02:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:10.072 07:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:10.072 07:02:16 -- common/autotest_common.sh@10 -- # set +x 00:22:10.072 ************************************ 00:22:10.072 START TEST nvmf_async_init 00:22:10.072 
************************************ 00:22:10.072 07:02:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:10.072 * Looking for test storage... 00:22:10.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:10.072 07:02:16 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.072 07:02:16 -- nvmf/common.sh@7 -- # uname -s 00:22:10.072 07:02:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.072 07:02:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.072 07:02:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.072 07:02:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.072 07:02:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.072 07:02:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.072 07:02:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.072 07:02:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.072 07:02:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.072 07:02:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.072 07:02:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.072 07:02:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.072 07:02:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.072 07:02:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.072 07:02:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.072 07:02:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.072 07:02:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.072 07:02:16 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.072 07:02:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.072 07:02:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.072 07:02:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.072 07:02:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.072 07:02:16 -- paths/export.sh@5 -- # export PATH 00:22:10.072 07:02:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.072 07:02:16 -- nvmf/common.sh@46 -- # : 0 00:22:10.072 07:02:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:10.072 07:02:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:10.072 07:02:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:10.072 07:02:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.072 07:02:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.072 07:02:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:10.072 07:02:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:10.072 07:02:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:10.072 07:02:16 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:10.072 07:02:16 -- host/async_init.sh@14 -- # null_block_size=512 00:22:10.072 07:02:16 -- host/async_init.sh@15 -- # null_bdev=null0 00:22:10.072 07:02:16 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:10.072 07:02:16 -- host/async_init.sh@20 -- # uuidgen 00:22:10.072 07:02:16 -- host/async_init.sh@20 -- # tr -d - 00:22:10.072 07:02:16 -- host/async_init.sh@20 -- # nguid=0a335d94055348a8b88419b2ec7f02c0 00:22:10.072 07:02:16 -- host/async_init.sh@22 -- # nvmftestinit 00:22:10.072 07:02:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:10.072 07:02:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.072 07:02:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:10.072 07:02:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 
00:22:10.072 07:02:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:10.072 07:02:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.072 07:02:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.072 07:02:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.072 07:02:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:10.072 07:02:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:10.072 07:02:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:10.072 07:02:16 -- common/autotest_common.sh@10 -- # set +x 00:22:11.972 07:02:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:11.972 07:02:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:11.972 07:02:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:11.972 07:02:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:11.972 07:02:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:11.972 07:02:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:11.972 07:02:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:11.972 07:02:18 -- nvmf/common.sh@294 -- # net_devs=() 00:22:11.972 07:02:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:11.972 07:02:18 -- nvmf/common.sh@295 -- # e810=() 00:22:11.972 07:02:18 -- nvmf/common.sh@295 -- # local -ga e810 00:22:11.972 07:02:18 -- nvmf/common.sh@296 -- # x722=() 00:22:11.972 07:02:18 -- nvmf/common.sh@296 -- # local -ga x722 00:22:11.972 07:02:18 -- nvmf/common.sh@297 -- # mlx=() 00:22:11.972 07:02:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:11.972 07:02:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.972 07:02:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:11.972 07:02:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:11.972 07:02:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:11.972 07:02:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:11.972 07:02:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:11.972 07:02:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:11.972 07:02:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:11.972 07:02:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:11.972 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:11.972 07:02:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:11.972 07:02:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:11.972 07:02:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.972 07:02:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.972 07:02:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:11.972 07:02:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:11.972 07:02:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:11.972 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:11.972 07:02:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:11.972 07:02:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:11.973 07:02:18 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.973 07:02:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.973 07:02:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:11.973 07:02:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:11.973 07:02:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:11.973 07:02:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:11.973 07:02:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:11.973 07:02:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.973 07:02:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:11.973 07:02:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.973 07:02:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:11.973 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:11.973 07:02:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.973 07:02:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:11.973 07:02:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.973 07:02:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:11.973 07:02:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.973 07:02:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:11.973 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:11.973 07:02:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.973 07:02:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:11.973 07:02:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:11.973 07:02:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:11.973 07:02:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:11.973 07:02:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:11.973 07:02:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.973 07:02:18 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.973 07:02:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.973 07:02:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:11.973 07:02:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.973 07:02:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.973 07:02:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:11.973 07:02:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.973 07:02:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.973 07:02:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:11.973 07:02:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:11.973 07:02:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.973 07:02:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.973 07:02:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.973 07:02:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.973 07:02:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:11.973 07:02:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.973 07:02:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.973 07:02:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.973 07:02:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:11.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:11.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:22:11.973 00:22:11.973 --- 10.0.0.2 ping statistics --- 00:22:11.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.973 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:22:11.973 07:02:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:22:11.973 00:22:11.973 --- 10.0.0.1 ping statistics --- 00:22:11.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.973 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:11.973 07:02:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.973 07:02:18 -- nvmf/common.sh@410 -- # return 0 00:22:11.973 07:02:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:11.973 07:02:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.973 07:02:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:11.973 07:02:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:11.973 07:02:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.973 07:02:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:11.973 07:02:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:11.973 07:02:18 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:11.973 07:02:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:11.973 07:02:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:11.973 07:02:18 -- common/autotest_common.sh@10 -- # set +x 00:22:11.973 07:02:18 -- nvmf/common.sh@469 -- # nvmfpid=3100043 00:22:11.973 07:02:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:11.973 07:02:18 -- nvmf/common.sh@470 -- # waitforlisten 3100043 00:22:11.973 07:02:18 -- common/autotest_common.sh@819 
-- # '[' -z 3100043 ']' 00:22:11.973 07:02:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.973 07:02:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:11.973 07:02:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.973 07:02:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:11.973 07:02:18 -- common/autotest_common.sh@10 -- # set +x 00:22:11.973 [2024-05-12 07:02:18.846006] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:11.973 [2024-05-12 07:02:18.846074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.973 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.973 [2024-05-12 07:02:18.912327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.973 [2024-05-12 07:02:19.032165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:11.973 [2024-05-12 07:02:19.032347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.973 [2024-05-12 07:02:19.032365] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.973 [2024-05-12 07:02:19.032378] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:11.973 [2024-05-12 07:02:19.032423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.907 07:02:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:12.907 07:02:19 -- common/autotest_common.sh@852 -- # return 0 00:22:12.907 07:02:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:12.907 07:02:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:12.907 07:02:19 -- common/autotest_common.sh@10 -- # set +x 00:22:12.907 07:02:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.907 07:02:19 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:12.907 07:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.907 07:02:19 -- common/autotest_common.sh@10 -- # set +x 00:22:12.907 [2024-05-12 07:02:19.883322] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.907 07:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.907 07:02:19 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:12.907 07:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.907 07:02:19 -- common/autotest_common.sh@10 -- # set +x 00:22:12.907 null0 00:22:12.907 07:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.907 07:02:19 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:12.907 07:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.907 07:02:19 -- common/autotest_common.sh@10 -- # set +x 00:22:12.907 07:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.907 07:02:19 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:12.907 07:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.907 07:02:19 -- common/autotest_common.sh@10 -- # set +x 00:22:12.907 07:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.907 07:02:19 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0a335d94055348a8b88419b2ec7f02c0 00:22:12.907 07:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.907 07:02:19 -- common/autotest_common.sh@10 -- # set +x 00:22:12.907 07:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.907 07:02:19 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:12.907 07:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.907 07:02:19 -- common/autotest_common.sh@10 -- # set +x 00:22:12.907 [2024-05-12 07:02:19.923552] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.907 07:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.907 07:02:19 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:12.907 07:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.907 07:02:19 -- common/autotest_common.sh@10 -- # set +x 00:22:13.165 nvme0n1 00:22:13.165 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.165 07:02:20 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.165 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.165 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.165 [ 00:22:13.165 { 00:22:13.165 "name": "nvme0n1", 00:22:13.165 "aliases": [ 00:22:13.165 "0a335d94-0553-48a8-b884-19b2ec7f02c0" 00:22:13.165 ], 00:22:13.165 "product_name": "NVMe disk", 00:22:13.165 "block_size": 512, 00:22:13.165 "num_blocks": 2097152, 00:22:13.165 "uuid": "0a335d94-0553-48a8-b884-19b2ec7f02c0", 00:22:13.165 "assigned_rate_limits": { 00:22:13.165 "rw_ios_per_sec": 0, 00:22:13.165 "rw_mbytes_per_sec": 0, 00:22:13.165 "r_mbytes_per_sec": 0, 00:22:13.165 "w_mbytes_per_sec": 0 00:22:13.165 }, 00:22:13.165 
"claimed": false, 00:22:13.165 "zoned": false, 00:22:13.165 "supported_io_types": { 00:22:13.165 "read": true, 00:22:13.165 "write": true, 00:22:13.165 "unmap": false, 00:22:13.166 "write_zeroes": true, 00:22:13.166 "flush": true, 00:22:13.166 "reset": true, 00:22:13.166 "compare": true, 00:22:13.166 "compare_and_write": true, 00:22:13.166 "abort": true, 00:22:13.166 "nvme_admin": true, 00:22:13.166 "nvme_io": true 00:22:13.166 }, 00:22:13.166 "driver_specific": { 00:22:13.166 "nvme": [ 00:22:13.166 { 00:22:13.166 "trid": { 00:22:13.166 "trtype": "TCP", 00:22:13.166 "adrfam": "IPv4", 00:22:13.166 "traddr": "10.0.0.2", 00:22:13.166 "trsvcid": "4420", 00:22:13.166 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.166 }, 00:22:13.166 "ctrlr_data": { 00:22:13.166 "cntlid": 1, 00:22:13.166 "vendor_id": "0x8086", 00:22:13.166 "model_number": "SPDK bdev Controller", 00:22:13.166 "serial_number": "00000000000000000000", 00:22:13.166 "firmware_revision": "24.01.1", 00:22:13.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.166 "oacs": { 00:22:13.166 "security": 0, 00:22:13.166 "format": 0, 00:22:13.166 "firmware": 0, 00:22:13.166 "ns_manage": 0 00:22:13.166 }, 00:22:13.166 "multi_ctrlr": true, 00:22:13.166 "ana_reporting": false 00:22:13.166 }, 00:22:13.166 "vs": { 00:22:13.166 "nvme_version": "1.3" 00:22:13.166 }, 00:22:13.166 "ns_data": { 00:22:13.166 "id": 1, 00:22:13.166 "can_share": true 00:22:13.166 } 00:22:13.166 } 00:22:13.166 ], 00:22:13.166 "mp_policy": "active_passive" 00:22:13.166 } 00:22:13.166 } 00:22:13.166 ] 00:22:13.166 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.166 07:02:20 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:13.166 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.166 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.166 [2024-05-12 07:02:20.172107] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:22:13.166 [2024-05-12 07:02:20.172209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa669d0 (9): Bad file descriptor 00:22:13.425 [2024-05-12 07:02:20.303862] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:13.425 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.425 07:02:20 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.425 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.425 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.425 [ 00:22:13.425 { 00:22:13.425 "name": "nvme0n1", 00:22:13.425 "aliases": [ 00:22:13.425 "0a335d94-0553-48a8-b884-19b2ec7f02c0" 00:22:13.425 ], 00:22:13.425 "product_name": "NVMe disk", 00:22:13.425 "block_size": 512, 00:22:13.425 "num_blocks": 2097152, 00:22:13.425 "uuid": "0a335d94-0553-48a8-b884-19b2ec7f02c0", 00:22:13.425 "assigned_rate_limits": { 00:22:13.425 "rw_ios_per_sec": 0, 00:22:13.425 "rw_mbytes_per_sec": 0, 00:22:13.425 "r_mbytes_per_sec": 0, 00:22:13.425 "w_mbytes_per_sec": 0 00:22:13.425 }, 00:22:13.425 "claimed": false, 00:22:13.425 "zoned": false, 00:22:13.425 "supported_io_types": { 00:22:13.425 "read": true, 00:22:13.425 "write": true, 00:22:13.425 "unmap": false, 00:22:13.425 "write_zeroes": true, 00:22:13.425 "flush": true, 00:22:13.425 "reset": true, 00:22:13.425 "compare": true, 00:22:13.425 "compare_and_write": true, 00:22:13.425 "abort": true, 00:22:13.425 "nvme_admin": true, 00:22:13.425 "nvme_io": true 00:22:13.426 }, 00:22:13.426 "driver_specific": { 00:22:13.426 "nvme": [ 00:22:13.426 { 00:22:13.426 "trid": { 00:22:13.426 "trtype": "TCP", 00:22:13.426 "adrfam": "IPv4", 00:22:13.426 "traddr": "10.0.0.2", 00:22:13.426 "trsvcid": "4420", 00:22:13.426 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.426 }, 00:22:13.426 "ctrlr_data": { 00:22:13.426 "cntlid": 2, 00:22:13.426 "vendor_id": "0x8086", 00:22:13.426 "model_number": "SPDK bdev 
Controller", 00:22:13.426 "serial_number": "00000000000000000000", 00:22:13.426 "firmware_revision": "24.01.1", 00:22:13.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.426 "oacs": { 00:22:13.426 "security": 0, 00:22:13.426 "format": 0, 00:22:13.426 "firmware": 0, 00:22:13.426 "ns_manage": 0 00:22:13.426 }, 00:22:13.426 "multi_ctrlr": true, 00:22:13.426 "ana_reporting": false 00:22:13.426 }, 00:22:13.426 "vs": { 00:22:13.426 "nvme_version": "1.3" 00:22:13.426 }, 00:22:13.426 "ns_data": { 00:22:13.426 "id": 1, 00:22:13.426 "can_share": true 00:22:13.426 } 00:22:13.426 } 00:22:13.426 ], 00:22:13.426 "mp_policy": "active_passive" 00:22:13.426 } 00:22:13.426 } 00:22:13.426 ] 00:22:13.426 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.426 07:02:20 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.426 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.426 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.426 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.426 07:02:20 -- host/async_init.sh@53 -- # mktemp 00:22:13.426 07:02:20 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.C8FSmQnFRl 00:22:13.426 07:02:20 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:13.426 07:02:20 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.C8FSmQnFRl 00:22:13.426 07:02:20 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:13.426 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.426 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.426 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.426 07:02:20 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:13.426 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:22:13.426 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.426 [2024-05-12 07:02:20.348714] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.426 [2024-05-12 07:02:20.348890] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:13.426 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.426 07:02:20 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C8FSmQnFRl 00:22:13.426 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.426 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.426 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.426 07:02:20 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C8FSmQnFRl 00:22:13.426 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.426 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.426 [2024-05-12 07:02:20.364758] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.426 nvme0n1 00:22:13.426 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.426 07:02:20 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.426 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.426 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.426 [ 00:22:13.426 { 00:22:13.426 "name": "nvme0n1", 00:22:13.426 "aliases": [ 00:22:13.426 "0a335d94-0553-48a8-b884-19b2ec7f02c0" 00:22:13.426 ], 00:22:13.426 "product_name": "NVMe disk", 00:22:13.426 "block_size": 512, 00:22:13.426 "num_blocks": 2097152, 00:22:13.426 "uuid": "0a335d94-0553-48a8-b884-19b2ec7f02c0", 00:22:13.426 "assigned_rate_limits": { 00:22:13.426 "rw_ios_per_sec": 0, 
00:22:13.426 "rw_mbytes_per_sec": 0, 00:22:13.426 "r_mbytes_per_sec": 0, 00:22:13.426 "w_mbytes_per_sec": 0 00:22:13.426 }, 00:22:13.426 "claimed": false, 00:22:13.426 "zoned": false, 00:22:13.426 "supported_io_types": { 00:22:13.426 "read": true, 00:22:13.426 "write": true, 00:22:13.426 "unmap": false, 00:22:13.426 "write_zeroes": true, 00:22:13.426 "flush": true, 00:22:13.426 "reset": true, 00:22:13.426 "compare": true, 00:22:13.426 "compare_and_write": true, 00:22:13.426 "abort": true, 00:22:13.426 "nvme_admin": true, 00:22:13.426 "nvme_io": true 00:22:13.426 }, 00:22:13.426 "driver_specific": { 00:22:13.426 "nvme": [ 00:22:13.426 { 00:22:13.426 "trid": { 00:22:13.426 "trtype": "TCP", 00:22:13.426 "adrfam": "IPv4", 00:22:13.426 "traddr": "10.0.0.2", 00:22:13.426 "trsvcid": "4421", 00:22:13.426 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.426 }, 00:22:13.426 "ctrlr_data": { 00:22:13.426 "cntlid": 3, 00:22:13.426 "vendor_id": "0x8086", 00:22:13.426 "model_number": "SPDK bdev Controller", 00:22:13.426 "serial_number": "00000000000000000000", 00:22:13.426 "firmware_revision": "24.01.1", 00:22:13.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.426 "oacs": { 00:22:13.426 "security": 0, 00:22:13.426 "format": 0, 00:22:13.426 "firmware": 0, 00:22:13.426 "ns_manage": 0 00:22:13.426 }, 00:22:13.426 "multi_ctrlr": true, 00:22:13.426 "ana_reporting": false 00:22:13.426 }, 00:22:13.426 "vs": { 00:22:13.426 "nvme_version": "1.3" 00:22:13.426 }, 00:22:13.426 "ns_data": { 00:22:13.426 "id": 1, 00:22:13.426 "can_share": true 00:22:13.426 } 00:22:13.426 } 00:22:13.426 ], 00:22:13.426 "mp_policy": "active_passive" 00:22:13.426 } 00:22:13.426 } 00:22:13.426 ] 00:22:13.426 07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.426 07:02:20 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.426 07:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.426 07:02:20 -- common/autotest_common.sh@10 -- # set +x 00:22:13.426 
07:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.426 07:02:20 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.C8FSmQnFRl 00:22:13.426 07:02:20 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:13.426 07:02:20 -- host/async_init.sh@78 -- # nvmftestfini 00:22:13.426 07:02:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:13.426 07:02:20 -- nvmf/common.sh@116 -- # sync 00:22:13.426 07:02:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:13.426 07:02:20 -- nvmf/common.sh@119 -- # set +e 00:22:13.426 07:02:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:13.426 07:02:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:13.426 rmmod nvme_tcp 00:22:13.426 rmmod nvme_fabrics 00:22:13.426 rmmod nvme_keyring 00:22:13.426 07:02:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:13.426 07:02:20 -- nvmf/common.sh@123 -- # set -e 00:22:13.426 07:02:20 -- nvmf/common.sh@124 -- # return 0 00:22:13.426 07:02:20 -- nvmf/common.sh@477 -- # '[' -n 3100043 ']' 00:22:13.426 07:02:20 -- nvmf/common.sh@478 -- # killprocess 3100043 00:22:13.426 07:02:20 -- common/autotest_common.sh@926 -- # '[' -z 3100043 ']' 00:22:13.426 07:02:20 -- common/autotest_common.sh@930 -- # kill -0 3100043 00:22:13.426 07:02:20 -- common/autotest_common.sh@931 -- # uname 00:22:13.426 07:02:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.426 07:02:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3100043 00:22:13.426 07:02:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:13.426 07:02:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:13.426 07:02:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3100043' 00:22:13.426 killing process with pid 3100043 00:22:13.426 07:02:20 -- common/autotest_common.sh@945 -- # kill 3100043 00:22:13.426 07:02:20 -- common/autotest_common.sh@950 -- # wait 3100043 00:22:13.685 07:02:20 -- nvmf/common.sh@480 -- # '[' '' == iso 
']' 00:22:13.685 07:02:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:13.685 07:02:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:13.685 07:02:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.685 07:02:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:13.685 07:02:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.685 07:02:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.685 07:02:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.217 07:02:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:16.217 00:22:16.217 real 0m6.124s 00:22:16.217 user 0m2.998s 00:22:16.217 sys 0m1.777s 00:22:16.217 07:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.217 07:02:22 -- common/autotest_common.sh@10 -- # set +x 00:22:16.217 ************************************ 00:22:16.217 END TEST nvmf_async_init 00:22:16.217 ************************************ 00:22:16.217 07:02:22 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:16.217 07:02:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:16.217 07:02:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:16.217 07:02:22 -- common/autotest_common.sh@10 -- # set +x 00:22:16.217 ************************************ 00:22:16.217 START TEST dma 00:22:16.217 ************************************ 00:22:16.217 07:02:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:16.217 * Looking for test storage... 
00:22:16.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.217 07:02:22 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.217 07:02:22 -- nvmf/common.sh@7 -- # uname -s 00:22:16.217 07:02:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.217 07:02:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.217 07:02:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.217 07:02:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.217 07:02:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.217 07:02:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.217 07:02:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.217 07:02:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.217 07:02:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.217 07:02:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.217 07:02:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.217 07:02:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.217 07:02:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.217 07:02:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.217 07:02:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.217 07:02:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.217 07:02:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.217 07:02:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.217 07:02:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.217 07:02:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.217 07:02:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.217 07:02:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.218 07:02:22 -- paths/export.sh@5 -- # export PATH 00:22:16.218 07:02:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.218 07:02:22 -- nvmf/common.sh@46 -- # : 0 00:22:16.218 07:02:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:16.218 07:02:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:16.218 07:02:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:16.218 07:02:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.218 07:02:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.218 07:02:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:16.218 07:02:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:16.218 07:02:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:16.218 07:02:22 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:16.218 07:02:22 -- host/dma.sh@13 -- # exit 0 00:22:16.218 00:22:16.218 real 0m0.068s 00:22:16.218 user 0m0.025s 00:22:16.218 sys 0m0.048s 00:22:16.218 07:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.218 07:02:22 -- common/autotest_common.sh@10 -- # set +x 00:22:16.218 ************************************ 00:22:16.218 END TEST dma 00:22:16.218 ************************************ 00:22:16.218 07:02:22 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:16.218 07:02:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:16.218 07:02:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:16.218 07:02:22 -- common/autotest_common.sh@10 
-- # set +x 00:22:16.218 ************************************ 00:22:16.218 START TEST nvmf_identify 00:22:16.218 ************************************ 00:22:16.218 07:02:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:16.218 * Looking for test storage... 00:22:16.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.218 07:02:22 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.218 07:02:22 -- nvmf/common.sh@7 -- # uname -s 00:22:16.218 07:02:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.218 07:02:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.218 07:02:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.218 07:02:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.218 07:02:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.218 07:02:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.218 07:02:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.218 07:02:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.218 07:02:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.218 07:02:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.218 07:02:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.218 07:02:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.218 07:02:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.218 07:02:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.218 07:02:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.218 07:02:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.218 07:02:22 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:16.218 07:02:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.218 07:02:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.218 07:02:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.218 07:02:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.218 07:02:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.218 07:02:22 -- paths/export.sh@5 -- # export PATH 00:22:16.218 
07:02:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.218 07:02:22 -- nvmf/common.sh@46 -- # : 0 00:22:16.218 07:02:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:16.218 07:02:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:16.218 07:02:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:16.218 07:02:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.218 07:02:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.218 07:02:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:16.218 07:02:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:16.218 07:02:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:16.218 07:02:23 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.218 07:02:23 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.218 07:02:23 -- host/identify.sh@14 -- # nvmftestinit 00:22:16.218 07:02:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:16.218 07:02:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.218 07:02:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:16.218 07:02:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:16.218 07:02:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:16.218 07:02:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.218 07:02:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.218 07:02:23 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:16.218 07:02:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:16.218 07:02:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:16.218 07:02:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:16.218 07:02:23 -- common/autotest_common.sh@10 -- # set +x 00:22:18.117 07:02:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:18.117 07:02:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:18.117 07:02:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:18.117 07:02:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:18.117 07:02:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:18.117 07:02:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:18.117 07:02:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:18.117 07:02:24 -- nvmf/common.sh@294 -- # net_devs=() 00:22:18.117 07:02:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:18.117 07:02:24 -- nvmf/common.sh@295 -- # e810=() 00:22:18.117 07:02:24 -- nvmf/common.sh@295 -- # local -ga e810 00:22:18.117 07:02:24 -- nvmf/common.sh@296 -- # x722=() 00:22:18.117 07:02:24 -- nvmf/common.sh@296 -- # local -ga x722 00:22:18.117 07:02:24 -- nvmf/common.sh@297 -- # mlx=() 00:22:18.117 07:02:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:18.117 07:02:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.117 07:02:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:18.117 07:02:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:18.117 07:02:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:18.117 07:02:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:18.117 07:02:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:18.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:18.117 07:02:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:18.117 07:02:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:18.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:18.117 07:02:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:18.117 07:02:24 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:18.117 07:02:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.117 07:02:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:18.117 07:02:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.117 07:02:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:18.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:18.117 07:02:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.117 07:02:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:18.117 07:02:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.117 07:02:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:18.117 07:02:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.117 07:02:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:18.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:18.117 07:02:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.117 07:02:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:18.117 07:02:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:18.117 07:02:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:18.117 07:02:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:18.117 07:02:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.117 07:02:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.117 07:02:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.117 07:02:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:18.117 07:02:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.117 07:02:24 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.118 07:02:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:18.118 07:02:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.118 07:02:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.118 07:02:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:18.118 07:02:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:18.118 07:02:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.118 07:02:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.118 07:02:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.118 07:02:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.118 07:02:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:18.118 07:02:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.118 07:02:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.118 07:02:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.118 07:02:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:18.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:22:18.118 00:22:18.118 --- 10.0.0.2 ping statistics --- 00:22:18.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.118 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:18.118 07:02:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:18.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:18.118 00:22:18.118 --- 10.0.0.1 ping statistics --- 00:22:18.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.118 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:18.118 07:02:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.118 07:02:25 -- nvmf/common.sh@410 -- # return 0 00:22:18.118 07:02:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:18.118 07:02:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.118 07:02:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:18.118 07:02:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:18.118 07:02:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.118 07:02:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:18.118 07:02:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:18.118 07:02:25 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:18.118 07:02:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:18.118 07:02:25 -- common/autotest_common.sh@10 -- # set +x 00:22:18.118 07:02:25 -- host/identify.sh@19 -- # nvmfpid=3102195 00:22:18.118 07:02:25 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:18.118 07:02:25 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.118 07:02:25 -- host/identify.sh@23 -- # waitforlisten 3102195 00:22:18.118 07:02:25 -- common/autotest_common.sh@819 -- # '[' -z 3102195 ']' 00:22:18.118 07:02:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.118 07:02:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:18.118 07:02:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:18.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.118 07:02:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:18.118 07:02:25 -- common/autotest_common.sh@10 -- # set +x 00:22:18.118 [2024-05-12 07:02:25.150511] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:18.118 [2024-05-12 07:02:25.150582] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.118 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.118 [2024-05-12 07:02:25.212675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:18.377 [2024-05-12 07:02:25.320238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:18.377 [2024-05-12 07:02:25.320376] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.377 [2024-05-12 07:02:25.320394] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.377 [2024-05-12 07:02:25.320417] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:18.377 [2024-05-12 07:02:25.321720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.377 [2024-05-12 07:02:25.321756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.377 [2024-05-12 07:02:25.321784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.377 [2024-05-12 07:02:25.321787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.313 07:02:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:19.313 07:02:26 -- common/autotest_common.sh@852 -- # return 0 00:22:19.313 07:02:26 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.313 07:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.313 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:19.313 [2024-05-12 07:02:26.142360] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.313 07:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.313 07:02:26 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:19.313 07:02:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:19.313 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:19.313 07:02:26 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:19.313 07:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.313 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:19.313 Malloc0 00:22:19.313 07:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.313 07:02:26 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.313 07:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.313 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:19.313 07:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.313 07:02:26 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:19.313 07:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.313 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:19.313 07:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.313 07:02:26 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.313 07:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.313 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:19.313 [2024-05-12 07:02:26.223949] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.313 07:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.313 07:02:26 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:19.313 07:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.313 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:19.313 07:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.313 07:02:26 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:19.313 07:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.313 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:19.313 [2024-05-12 07:02:26.239711] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:19.313 [ 00:22:19.313 { 00:22:19.313 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:19.313 "subtype": "Discovery", 00:22:19.313 "listen_addresses": [ 00:22:19.313 { 00:22:19.313 "transport": "TCP", 00:22:19.313 "trtype": "TCP", 00:22:19.313 "adrfam": "IPv4", 00:22:19.313 "traddr": "10.0.0.2", 00:22:19.313 "trsvcid": "4420" 00:22:19.313 } 00:22:19.313 ], 00:22:19.313 "allow_any_host": true, 00:22:19.313 "hosts": [] 00:22:19.313 }, 00:22:19.313 
{ 00:22:19.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.313 "subtype": "NVMe", 00:22:19.313 "listen_addresses": [ 00:22:19.313 { 00:22:19.313 "transport": "TCP", 00:22:19.313 "trtype": "TCP", 00:22:19.313 "adrfam": "IPv4", 00:22:19.313 "traddr": "10.0.0.2", 00:22:19.313 "trsvcid": "4420" 00:22:19.313 } 00:22:19.313 ], 00:22:19.313 "allow_any_host": true, 00:22:19.314 "hosts": [], 00:22:19.314 "serial_number": "SPDK00000000000001", 00:22:19.314 "model_number": "SPDK bdev Controller", 00:22:19.314 "max_namespaces": 32, 00:22:19.314 "min_cntlid": 1, 00:22:19.314 "max_cntlid": 65519, 00:22:19.314 "namespaces": [ 00:22:19.314 { 00:22:19.314 "nsid": 1, 00:22:19.314 "bdev_name": "Malloc0", 00:22:19.314 "name": "Malloc0", 00:22:19.314 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:19.314 "eui64": "ABCDEF0123456789", 00:22:19.314 "uuid": "14777187-5d7c-4b58-aaf8-f8c582e241ea" 00:22:19.314 } 00:22:19.314 ] 00:22:19.314 } 00:22:19.314 ] 00:22:19.314 07:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.314 07:02:26 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:19.314 [2024-05-12 07:02:26.264924] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
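For reference, the target configuration exercised in the log above (TCP transport creation, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem with the logged serial number, NGUID, and EUI64, and the TCP listeners on 10.0.0.2:4420) corresponds roughly to the following SPDK JSON-RPC requests. This is a sketch assembled from the parameters visible in the log, not output captured from the run; exact parameter names follow the SPDK rpc.py conventions and should be checked against the SPDK version in use.

```python
import json

def rpc(method, **params):
    # Minimal JSON-RPC 2.0 request builder (static id for illustration).
    return {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}

# MALLOC_BDEV_SIZE=64 (MiB) and MALLOC_BLOCK_SIZE=512 from the log imply
# 64 MiB / 512 B = 131072 blocks for the Malloc0 bdev.
requests = [
    rpc("nvmf_create_transport", trtype="TCP"),
    rpc("bdev_malloc_create",
        num_blocks=64 * 1024 * 1024 // 512, block_size=512, name="Malloc0"),
    rpc("nvmf_create_subsystem", nqn="nqn.2016-06.io.spdk:cnode1",
        allow_any_host=True, serial_number="SPDK00000000000001"),
    rpc("nvmf_subsystem_add_ns", nqn="nqn.2016-06.io.spdk:cnode1",
        namespace={"bdev_name": "Malloc0",
                   "nguid": "ABCDEF0123456789ABCDEF0123456789",
                   "eui64": "ABCDEF0123456789"}),
    rpc("nvmf_subsystem_add_listener", nqn="nqn.2016-06.io.spdk:cnode1",
        listen_address={"trtype": "TCP", "adrfam": "IPv4",
                        "traddr": "10.0.0.2", "trsvcid": "4420"}),
]

# One request per line, as rpc.py would send them over /var/tmp/spdk.sock.
payload = "\n".join(json.dumps(r) for r in requests)
```

In the actual test these calls are issued through `rpc_cmd` against the `nvmf_tgt` instance running inside the `cvl_0_0_ns_spdk` namespace, which is why the `nvmf_get_subsystems` dump above reports the discovery subsystem plus `cnode1` listening on 10.0.0.2:4420.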
00:22:19.314 [2024-05-12 07:02:26.264967] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102351 ] 00:22:19.314 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.314 [2024-05-12 07:02:26.301087] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:19.314 [2024-05-12 07:02:26.301151] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:19.314 [2024-05-12 07:02:26.301161] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:19.314 [2024-05-12 07:02:26.301177] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:19.314 [2024-05-12 07:02:26.301190] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:19.314 [2024-05-12 07:02:26.301612] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:19.314 [2024-05-12 07:02:26.301667] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbf6e10 0 00:22:19.314 [2024-05-12 07:02:26.307707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:19.314 [2024-05-12 07:02:26.307731] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:19.314 [2024-05-12 07:02:26.307741] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:19.314 [2024-05-12 07:02:26.307747] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:19.314 [2024-05-12 07:02:26.307806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.307820] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:19.314 [2024-05-12 07:02:26.307828] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.314 [2024-05-12 07:02:26.307848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:19.314 [2024-05-12 07:02:26.307877] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76bf0, cid 0, qid 0 00:22:19.314 [2024-05-12 07:02:26.315711] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.314 [2024-05-12 07:02:26.315729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.314 [2024-05-12 07:02:26.315736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.315743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc76bf0) on tqpair=0xbf6e10 00:22:19.314 [2024-05-12 07:02:26.315765] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:19.314 [2024-05-12 07:02:26.315776] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:19.314 [2024-05-12 07:02:26.315786] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:19.314 [2024-05-12 07:02:26.315810] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.315819] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.315825] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.314 [2024-05-12 07:02:26.315836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.314 [2024-05-12 07:02:26.315858] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xc76bf0, cid 0, qid 0 00:22:19.314 [2024-05-12 07:02:26.316055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.314 [2024-05-12 07:02:26.316067] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.314 [2024-05-12 07:02:26.316074] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc76bf0) on tqpair=0xbf6e10 00:22:19.314 [2024-05-12 07:02:26.316097] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:19.314 [2024-05-12 07:02:26.316111] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:19.314 [2024-05-12 07:02:26.316123] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316136] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.314 [2024-05-12 07:02:26.316147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.314 [2024-05-12 07:02:26.316168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76bf0, cid 0, qid 0 00:22:19.314 [2024-05-12 07:02:26.316339] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.314 [2024-05-12 07:02:26.316355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.314 [2024-05-12 07:02:26.316363] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc76bf0) on tqpair=0xbf6e10 00:22:19.314 [2024-05-12 07:02:26.316379] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:19.314 [2024-05-12 07:02:26.316394] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:19.314 [2024-05-12 07:02:26.316406] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.314 [2024-05-12 07:02:26.316430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.314 [2024-05-12 07:02:26.316450] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76bf0, cid 0, qid 0 00:22:19.314 [2024-05-12 07:02:26.316590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.314 [2024-05-12 07:02:26.316602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.314 [2024-05-12 07:02:26.316608] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316615] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc76bf0) on tqpair=0xbf6e10 00:22:19.314 [2024-05-12 07:02:26.316624] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:19.314 [2024-05-12 07:02:26.316640] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316656] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.314 
[2024-05-12 07:02:26.316666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.314 [2024-05-12 07:02:26.316686] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76bf0, cid 0, qid 0 00:22:19.314 [2024-05-12 07:02:26.316844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.314 [2024-05-12 07:02:26.316860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.314 [2024-05-12 07:02:26.316866] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.316873] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc76bf0) on tqpair=0xbf6e10 00:22:19.314 [2024-05-12 07:02:26.316883] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:19.314 [2024-05-12 07:02:26.316892] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:19.314 [2024-05-12 07:02:26.316906] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:19.314 [2024-05-12 07:02:26.317017] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:19.314 [2024-05-12 07:02:26.317040] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:19.314 [2024-05-12 07:02:26.317059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.317067] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.317073] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.314 [2024-05-12 07:02:26.317083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.314 [2024-05-12 07:02:26.317108] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76bf0, cid 0, qid 0 00:22:19.314 [2024-05-12 07:02:26.317291] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.314 [2024-05-12 07:02:26.317303] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.314 [2024-05-12 07:02:26.317309] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.317316] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc76bf0) on tqpair=0xbf6e10 00:22:19.314 [2024-05-12 07:02:26.317326] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:19.314 [2024-05-12 07:02:26.317341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.317350] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.314 [2024-05-12 07:02:26.317356] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.314 [2024-05-12 07:02:26.317367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.314 [2024-05-12 07:02:26.317387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76bf0, cid 0, qid 0 00:22:19.314 [2024-05-12 07:02:26.317531] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.314 [2024-05-12 07:02:26.317542] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.314 [2024-05-12 07:02:26.317549] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.314 [2024-05-12 
07:02:26.317556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc76bf0) on tqpair=0xbf6e10 00:22:19.314 [2024-05-12 07:02:26.317564] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:19.315 [2024-05-12 07:02:26.317573] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:19.315 [2024-05-12 07:02:26.317587] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:19.315 [2024-05-12 07:02:26.317601] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:19.315 [2024-05-12 07:02:26.317619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.317627] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.317633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.317644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.315 [2024-05-12 07:02:26.317665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76bf0, cid 0, qid 0 00:22:19.315 [2024-05-12 07:02:26.317885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.315 [2024-05-12 07:02:26.317901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.315 [2024-05-12 07:02:26.317908] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.317915] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xbf6e10): datao=0, datal=4096, cccid=0 00:22:19.315 [2024-05-12 07:02:26.317923] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc76bf0) on tqpair(0xbf6e10): expected_datao=0, payload_size=4096 00:22:19.315 [2024-05-12 07:02:26.317938] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.317947] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.317992] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.315 [2024-05-12 07:02:26.318003] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.315 [2024-05-12 07:02:26.318014] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc76bf0) on tqpair=0xbf6e10 00:22:19.315 [2024-05-12 07:02:26.318035] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:19.315 [2024-05-12 07:02:26.318049] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:19.315 [2024-05-12 07:02:26.318057] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:19.315 [2024-05-12 07:02:26.318067] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:19.315 [2024-05-12 07:02:26.318074] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:19.315 [2024-05-12 07:02:26.318083] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:19.315 [2024-05-12 07:02:26.318098] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to wait for configure aer (timeout 30000 ms) 00:22:19.315 [2024-05-12 07:02:26.318111] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318125] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.318136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:19.315 [2024-05-12 07:02:26.318157] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76bf0, cid 0, qid 0 00:22:19.315 [2024-05-12 07:02:26.318341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.315 [2024-05-12 07:02:26.318353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.315 [2024-05-12 07:02:26.318360] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318366] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc76bf0) on tqpair=0xbf6e10 00:22:19.315 [2024-05-12 07:02:26.318383] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318390] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318397] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.318406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.315 [2024-05-12 07:02:26.318416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318423] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318429] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.318437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.315 [2024-05-12 07:02:26.318447] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.318468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.315 [2024-05-12 07:02:26.318478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318485] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318491] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.318499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.315 [2024-05-12 07:02:26.318512] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:19.315 [2024-05-12 07:02:26.318532] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:19.315 [2024-05-12 07:02:26.318544] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318573] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.318585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.315 [2024-05-12 07:02:26.318607] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76bf0, cid 0, qid 0 00:22:19.315 [2024-05-12 07:02:26.318617] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76d50, cid 1, qid 0 00:22:19.315 [2024-05-12 07:02:26.318640] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc76eb0, cid 2, qid 0 00:22:19.315 [2024-05-12 07:02:26.318648] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.315 [2024-05-12 07:02:26.318656] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77170, cid 4, qid 0 00:22:19.315 [2024-05-12 07:02:26.318882] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.315 [2024-05-12 07:02:26.318896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.315 [2024-05-12 07:02:26.318903] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77170) on tqpair=0xbf6e10 00:22:19.315 [2024-05-12 07:02:26.318920] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:19.315 [2024-05-12 07:02:26.318929] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:19.315 [2024-05-12 07:02:26.318946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318955] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.318962] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.318972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.315 [2024-05-12 07:02:26.318993] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77170, cid 4, qid 0 00:22:19.315 [2024-05-12 07:02:26.319177] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.315 [2024-05-12 07:02:26.319189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.315 [2024-05-12 07:02:26.319196] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.319202] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf6e10): datao=0, datal=4096, cccid=4 00:22:19.315 [2024-05-12 07:02:26.319210] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc77170) on tqpair(0xbf6e10): expected_datao=0, payload_size=4096 00:22:19.315 [2024-05-12 07:02:26.319256] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.319265] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.319372] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.315 [2024-05-12 07:02:26.319387] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.315 [2024-05-12 07:02:26.319393] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.319400] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77170) on tqpair=0xbf6e10 00:22:19.315 [2024-05-12 07:02:26.319424] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:19.315 [2024-05-12 07:02:26.319462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.319471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.319478] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.319488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.315 [2024-05-12 07:02:26.319501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.319508] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.319514] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbf6e10) 00:22:19.315 [2024-05-12 07:02:26.319523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.315 [2024-05-12 07:02:26.319552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77170, cid 4, qid 0 00:22:19.315 [2024-05-12 07:02:26.319564] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc772d0, cid 5, qid 0 00:22:19.315 [2024-05-12 07:02:26.323724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.315 [2024-05-12 07:02:26.323741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.315 [2024-05-12 07:02:26.323748] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.323754] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf6e10): datao=0, datal=1024, cccid=4 00:22:19.315 [2024-05-12 07:02:26.323761] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc77170) on tqpair(0xbf6e10): expected_datao=0, payload_size=1024 00:22:19.315 [2024-05-12 07:02:26.323772] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:22:19.315 [2024-05-12 07:02:26.323779] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.315 [2024-05-12 07:02:26.323787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.315 [2024-05-12 07:02:26.323796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.316 [2024-05-12 07:02:26.323802] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.323809] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc772d0) on tqpair=0xbf6e10 00:22:19.316 [2024-05-12 07:02:26.360904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.316 [2024-05-12 07:02:26.360922] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.316 [2024-05-12 07:02:26.360930] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.360937] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77170) on tqpair=0xbf6e10 00:22:19.316 [2024-05-12 07:02:26.360956] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.360966] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.360972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf6e10) 00:22:19.316 [2024-05-12 07:02:26.360983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.316 [2024-05-12 07:02:26.361013] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77170, cid 4, qid 0 00:22:19.316 [2024-05-12 07:02:26.361263] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.316 [2024-05-12 07:02:26.361276] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.316 [2024-05-12 07:02:26.361282] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.361289] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf6e10): datao=0, datal=3072, cccid=4 00:22:19.316 [2024-05-12 07:02:26.361296] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc77170) on tqpair(0xbf6e10): expected_datao=0, payload_size=3072 00:22:19.316 [2024-05-12 07:02:26.361338] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.361349] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.401863] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.316 [2024-05-12 07:02:26.401881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.316 [2024-05-12 07:02:26.401888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.401895] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77170) on tqpair=0xbf6e10 00:22:19.316 [2024-05-12 07:02:26.401911] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.401921] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.401928] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf6e10) 00:22:19.316 [2024-05-12 07:02:26.401939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.316 [2024-05-12 07:02:26.401967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77170, cid 4, qid 0 00:22:19.316 [2024-05-12 07:02:26.402125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.316 [2024-05-12 07:02:26.402137] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.316 [2024-05-12 
07:02:26.402144] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.402150] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf6e10): datao=0, datal=8, cccid=4 00:22:19.316 [2024-05-12 07:02:26.402158] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc77170) on tqpair(0xbf6e10): expected_datao=0, payload_size=8 00:22:19.316 [2024-05-12 07:02:26.402169] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.316 [2024-05-12 07:02:26.402176] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.579 [2024-05-12 07:02:26.445722] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.579 [2024-05-12 07:02:26.445770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.579 [2024-05-12 07:02:26.445778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.579 [2024-05-12 07:02:26.445786] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77170) on tqpair=0xbf6e10 00:22:19.579 ===================================================== 00:22:19.579 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:19.579 ===================================================== 00:22:19.579 Controller Capabilities/Features 00:22:19.579 ================================ 00:22:19.579 Vendor ID: 0000 00:22:19.579 Subsystem Vendor ID: 0000 00:22:19.579 Serial Number: .................... 00:22:19.579 Model Number: ........................................ 
00:22:19.579 Firmware Version: 24.01.1 00:22:19.579 Recommended Arb Burst: 0 00:22:19.579 IEEE OUI Identifier: 00 00 00 00:22:19.579 Multi-path I/O 00:22:19.579 May have multiple subsystem ports: No 00:22:19.579 May have multiple controllers: No 00:22:19.579 Associated with SR-IOV VF: No 00:22:19.579 Max Data Transfer Size: 131072 00:22:19.579 Max Number of Namespaces: 0 00:22:19.579 Max Number of I/O Queues: 1024 00:22:19.579 NVMe Specification Version (VS): 1.3 00:22:19.579 NVMe Specification Version (Identify): 1.3 00:22:19.579 Maximum Queue Entries: 128 00:22:19.579 Contiguous Queues Required: Yes 00:22:19.579 Arbitration Mechanisms Supported 00:22:19.579 Weighted Round Robin: Not Supported 00:22:19.579 Vendor Specific: Not Supported 00:22:19.579 Reset Timeout: 15000 ms 00:22:19.579 Doorbell Stride: 4 bytes 00:22:19.579 NVM Subsystem Reset: Not Supported 00:22:19.579 Command Sets Supported 00:22:19.579 NVM Command Set: Supported 00:22:19.579 Boot Partition: Not Supported 00:22:19.579 Memory Page Size Minimum: 4096 bytes 00:22:19.579 Memory Page Size Maximum: 4096 bytes 00:22:19.579 Persistent Memory Region: Not Supported 00:22:19.579 Optional Asynchronous Events Supported 00:22:19.579 Namespace Attribute Notices: Not Supported 00:22:19.579 Firmware Activation Notices: Not Supported 00:22:19.579 ANA Change Notices: Not Supported 00:22:19.579 PLE Aggregate Log Change Notices: Not Supported 00:22:19.579 LBA Status Info Alert Notices: Not Supported 00:22:19.579 EGE Aggregate Log Change Notices: Not Supported 00:22:19.579 Normal NVM Subsystem Shutdown event: Not Supported 00:22:19.579 Zone Descriptor Change Notices: Not Supported 00:22:19.579 Discovery Log Change Notices: Supported 00:22:19.579 Controller Attributes 00:22:19.579 128-bit Host Identifier: Not Supported 00:22:19.579 Non-Operational Permissive Mode: Not Supported 00:22:19.579 NVM Sets: Not Supported 00:22:19.579 Read Recovery Levels: Not Supported 00:22:19.579 Endurance Groups: Not Supported 
00:22:19.579 Predictable Latency Mode: Not Supported 00:22:19.579 Traffic Based Keep ALive: Not Supported 00:22:19.579 Namespace Granularity: Not Supported 00:22:19.579 SQ Associations: Not Supported 00:22:19.579 UUID List: Not Supported 00:22:19.579 Multi-Domain Subsystem: Not Supported 00:22:19.579 Fixed Capacity Management: Not Supported 00:22:19.579 Variable Capacity Management: Not Supported 00:22:19.579 Delete Endurance Group: Not Supported 00:22:19.579 Delete NVM Set: Not Supported 00:22:19.579 Extended LBA Formats Supported: Not Supported 00:22:19.579 Flexible Data Placement Supported: Not Supported 00:22:19.579 00:22:19.579 Controller Memory Buffer Support 00:22:19.579 ================================ 00:22:19.579 Supported: No 00:22:19.579 00:22:19.579 Persistent Memory Region Support 00:22:19.579 ================================ 00:22:19.579 Supported: No 00:22:19.579 00:22:19.579 Admin Command Set Attributes 00:22:19.579 ============================ 00:22:19.579 Security Send/Receive: Not Supported 00:22:19.579 Format NVM: Not Supported 00:22:19.579 Firmware Activate/Download: Not Supported 00:22:19.579 Namespace Management: Not Supported 00:22:19.579 Device Self-Test: Not Supported 00:22:19.579 Directives: Not Supported 00:22:19.579 NVMe-MI: Not Supported 00:22:19.579 Virtualization Management: Not Supported 00:22:19.579 Doorbell Buffer Config: Not Supported 00:22:19.579 Get LBA Status Capability: Not Supported 00:22:19.579 Command & Feature Lockdown Capability: Not Supported 00:22:19.579 Abort Command Limit: 1 00:22:19.579 Async Event Request Limit: 4 00:22:19.579 Number of Firmware Slots: N/A 00:22:19.580 Firmware Slot 1 Read-Only: N/A 00:22:19.580 Firmware Activation Without Reset: N/A 00:22:19.580 Multiple Update Detection Support: N/A 00:22:19.580 Firmware Update Granularity: No Information Provided 00:22:19.580 Per-Namespace SMART Log: No 00:22:19.580 Asymmetric Namespace Access Log Page: Not Supported 00:22:19.580 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:19.580 Command Effects Log Page: Not Supported 00:22:19.580 Get Log Page Extended Data: Supported 00:22:19.580 Telemetry Log Pages: Not Supported 00:22:19.580 Persistent Event Log Pages: Not Supported 00:22:19.580 Supported Log Pages Log Page: May Support 00:22:19.580 Commands Supported & Effects Log Page: Not Supported 00:22:19.580 Feature Identifiers & Effects Log Page:May Support 00:22:19.580 NVMe-MI Commands & Effects Log Page: May Support 00:22:19.580 Data Area 4 for Telemetry Log: Not Supported 00:22:19.580 Error Log Page Entries Supported: 128 00:22:19.580 Keep Alive: Not Supported 00:22:19.580 00:22:19.580 NVM Command Set Attributes 00:22:19.580 ========================== 00:22:19.580 Submission Queue Entry Size 00:22:19.580 Max: 1 00:22:19.580 Min: 1 00:22:19.580 Completion Queue Entry Size 00:22:19.580 Max: 1 00:22:19.580 Min: 1 00:22:19.580 Number of Namespaces: 0 00:22:19.580 Compare Command: Not Supported 00:22:19.580 Write Uncorrectable Command: Not Supported 00:22:19.580 Dataset Management Command: Not Supported 00:22:19.580 Write Zeroes Command: Not Supported 00:22:19.580 Set Features Save Field: Not Supported 00:22:19.580 Reservations: Not Supported 00:22:19.580 Timestamp: Not Supported 00:22:19.580 Copy: Not Supported 00:22:19.580 Volatile Write Cache: Not Present 00:22:19.580 Atomic Write Unit (Normal): 1 00:22:19.580 Atomic Write Unit (PFail): 1 00:22:19.580 Atomic Compare & Write Unit: 1 00:22:19.580 Fused Compare & Write: Supported 00:22:19.580 Scatter-Gather List 00:22:19.580 SGL Command Set: Supported 00:22:19.580 SGL Keyed: Supported 00:22:19.580 SGL Bit Bucket Descriptor: Not Supported 00:22:19.580 SGL Metadata Pointer: Not Supported 00:22:19.580 Oversized SGL: Not Supported 00:22:19.580 SGL Metadata Address: Not Supported 00:22:19.580 SGL Offset: Supported 00:22:19.580 Transport SGL Data Block: Not Supported 00:22:19.580 Replay Protected Memory Block: Not Supported 00:22:19.580 00:22:19.580 
Firmware Slot Information 00:22:19.580 ========================= 00:22:19.580 Active slot: 0 00:22:19.580 00:22:19.580 00:22:19.580 Error Log 00:22:19.580 ========= 00:22:19.580 00:22:19.580 Active Namespaces 00:22:19.580 ================= 00:22:19.580 Discovery Log Page 00:22:19.580 ================== 00:22:19.580 Generation Counter: 2 00:22:19.580 Number of Records: 2 00:22:19.580 Record Format: 0 00:22:19.580 00:22:19.580 Discovery Log Entry 0 00:22:19.580 ---------------------- 00:22:19.580 Transport Type: 3 (TCP) 00:22:19.580 Address Family: 1 (IPv4) 00:22:19.580 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:19.580 Entry Flags: 00:22:19.580 Duplicate Returned Information: 1 00:22:19.580 Explicit Persistent Connection Support for Discovery: 1 00:22:19.580 Transport Requirements: 00:22:19.580 Secure Channel: Not Required 00:22:19.580 Port ID: 0 (0x0000) 00:22:19.580 Controller ID: 65535 (0xffff) 00:22:19.580 Admin Max SQ Size: 128 00:22:19.580 Transport Service Identifier: 4420 00:22:19.580 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:19.580 Transport Address: 10.0.0.2 00:22:19.580 Discovery Log Entry 1 00:22:19.580 ---------------------- 00:22:19.580 Transport Type: 3 (TCP) 00:22:19.580 Address Family: 1 (IPv4) 00:22:19.580 Subsystem Type: 2 (NVM Subsystem) 00:22:19.580 Entry Flags: 00:22:19.580 Duplicate Returned Information: 0 00:22:19.580 Explicit Persistent Connection Support for Discovery: 0 00:22:19.580 Transport Requirements: 00:22:19.580 Secure Channel: Not Required 00:22:19.580 Port ID: 0 (0x0000) 00:22:19.580 Controller ID: 65535 (0xffff) 00:22:19.580 Admin Max SQ Size: 128 00:22:19.580 Transport Service Identifier: 4420 00:22:19.580 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:19.580 Transport Address: 10.0.0.2 [2024-05-12 07:02:26.445907] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:19.580 [2024-05-12 07:02:26.445934] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.580 [2024-05-12 07:02:26.445946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.580 [2024-05-12 07:02:26.445966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.580 [2024-05-12 07:02:26.445975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.580 [2024-05-12 07:02:26.445993] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.446002] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.446009] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.580 [2024-05-12 07:02:26.446020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.580 [2024-05-12 07:02:26.446046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.580 [2024-05-12 07:02:26.446313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.580 [2024-05-12 07:02:26.446337] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.580 [2024-05-12 07:02:26.446351] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.446362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.580 [2024-05-12 07:02:26.446376] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.446384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.580 [2024-05-12 
07:02:26.446391] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.580 [2024-05-12 07:02:26.446402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.580 [2024-05-12 07:02:26.446431] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.580 [2024-05-12 07:02:26.446651] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.580 [2024-05-12 07:02:26.446664] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.580 [2024-05-12 07:02:26.446671] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.446678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.580 [2024-05-12 07:02:26.446687] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:19.580 [2024-05-12 07:02:26.446706] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:19.580 [2024-05-12 07:02:26.446728] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.446737] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.446744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.580 [2024-05-12 07:02:26.446755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.580 [2024-05-12 07:02:26.446776] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.580 [2024-05-12 07:02:26.446951] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.580 [2024-05-12 
07:02:26.446964] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.580 [2024-05-12 07:02:26.446970] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.446977] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.580 [2024-05-12 07:02:26.446994] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.447003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.447010] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.580 [2024-05-12 07:02:26.447023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.580 [2024-05-12 07:02:26.447053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.580 [2024-05-12 07:02:26.447195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.580 [2024-05-12 07:02:26.447207] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.580 [2024-05-12 07:02:26.447214] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.447221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.580 [2024-05-12 07:02:26.447238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.447253] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.447265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.580 [2024-05-12 07:02:26.447282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.580 [2024-05-12 
07:02:26.447315] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.580 [2024-05-12 07:02:26.447490] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.580 [2024-05-12 07:02:26.447511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.580 [2024-05-12 07:02:26.447519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.447525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.580 [2024-05-12 07:02:26.447543] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.447552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.580 [2024-05-12 07:02:26.447559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.581 [2024-05-12 07:02:26.447570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.581 [2024-05-12 07:02:26.447591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.581 [2024-05-12 07:02:26.447736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.447757] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.447764] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.447771] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.581 [2024-05-12 07:02:26.447787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.447796] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.447803] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.581 [2024-05-12 07:02:26.447813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.581 [2024-05-12 07:02:26.447834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.581 [2024-05-12 07:02:26.448030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.448045] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.448052] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448059] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.581 [2024-05-12 07:02:26.448075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448091] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.581 [2024-05-12 07:02:26.448102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.581 [2024-05-12 07:02:26.448122] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.581 [2024-05-12 07:02:26.448294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.448310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.448317] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448323] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.581 [2024-05-12 07:02:26.448340] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448349] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448356] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.581 [2024-05-12 07:02:26.448367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.581 [2024-05-12 07:02:26.448388] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.581 [2024-05-12 07:02:26.448530] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.448545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.448556] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448563] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.581 [2024-05-12 07:02:26.448580] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448589] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.581 [2024-05-12 07:02:26.448606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.581 [2024-05-12 07:02:26.448627] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.581 [2024-05-12 07:02:26.448823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.448839] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.448846] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448852] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.581 [2024-05-12 07:02:26.448869] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448879] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.448885] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.581 [2024-05-12 07:02:26.448896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.581 [2024-05-12 07:02:26.448917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.581 [2024-05-12 07:02:26.449122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.449137] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.449144] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.449151] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.581 [2024-05-12 07:02:26.449168] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.449177] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.449184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.581 [2024-05-12 07:02:26.449194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.581 [2024-05-12 07:02:26.449215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.581 [2024-05-12 
07:02:26.449385] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.449400] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.449408] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.449417] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.581 [2024-05-12 07:02:26.449434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.449443] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.449450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.581 [2024-05-12 07:02:26.449461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.581 [2024-05-12 07:02:26.449482] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.581 [2024-05-12 07:02:26.449675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.449690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.452767] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.452786] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.581 [2024-05-12 07:02:26.452806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.452815] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.452822] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf6e10) 00:22:19.581 [2024-05-12 07:02:26.452833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.581 [2024-05-12 07:02:26.452854] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77010, cid 3, qid 0 00:22:19.581 [2024-05-12 07:02:26.453071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.453084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.453091] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.453098] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc77010) on tqpair=0xbf6e10 00:22:19.581 [2024-05-12 07:02:26.453111] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:19.581 00:22:19.581 07:02:26 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:19.581 [2024-05-12 07:02:26.486318] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:19.581 [2024-05-12 07:02:26.486359] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102384 ] 00:22:19.581 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.581 [2024-05-12 07:02:26.518947] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:19.581 [2024-05-12 07:02:26.519000] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:19.581 [2024-05-12 07:02:26.519025] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:19.581 [2024-05-12 07:02:26.519039] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:19.581 [2024-05-12 07:02:26.519051] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:19.581 [2024-05-12 07:02:26.522732] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:19.581 [2024-05-12 07:02:26.522769] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcd8e10 0 00:22:19.581 [2024-05-12 07:02:26.530714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:19.581 [2024-05-12 07:02:26.530732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:19.581 [2024-05-12 07:02:26.530740] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:19.581 [2024-05-12 07:02:26.530746] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:19.581 [2024-05-12 07:02:26.530799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.530812] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.581 [2024-05-12 
07:02:26.530819] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.581 [2024-05-12 07:02:26.530833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:19.581 [2024-05-12 07:02:26.530859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.581 [2024-05-12 07:02:26.538725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.581 [2024-05-12 07:02:26.538746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.581 [2024-05-12 07:02:26.538754] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.581 [2024-05-12 07:02:26.538761] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd58bf0) on tqpair=0xcd8e10 00:22:19.581 [2024-05-12 07:02:26.538779] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:19.581 [2024-05-12 07:02:26.538806] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:19.582 [2024-05-12 07:02:26.538816] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:19.582 [2024-05-12 07:02:26.538835] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.538843] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.538850] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.582 [2024-05-12 07:02:26.538862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.582 [2024-05-12 07:02:26.538885] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.582 [2024-05-12 
07:02:26.539116] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.582 [2024-05-12 07:02:26.539132] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.582 [2024-05-12 07:02:26.539139] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.539146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd58bf0) on tqpair=0xcd8e10 00:22:19.582 [2024-05-12 07:02:26.539158] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:19.582 [2024-05-12 07:02:26.539173] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:19.582 [2024-05-12 07:02:26.539186] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.539193] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.539199] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.582 [2024-05-12 07:02:26.539210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.582 [2024-05-12 07:02:26.539231] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.582 [2024-05-12 07:02:26.539413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.582 [2024-05-12 07:02:26.539425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.582 [2024-05-12 07:02:26.539432] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.539439] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd58bf0) on tqpair=0xcd8e10 00:22:19.582 [2024-05-12 07:02:26.539447] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:19.582 [2024-05-12 07:02:26.539461] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:19.582 [2024-05-12 07:02:26.539473] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.539481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.539487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.582 [2024-05-12 07:02:26.539497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.582 [2024-05-12 07:02:26.539518] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.582 [2024-05-12 07:02:26.539706] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.582 [2024-05-12 07:02:26.539722] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.582 [2024-05-12 07:02:26.539733] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.539740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd58bf0) on tqpair=0xcd8e10 00:22:19.582 [2024-05-12 07:02:26.539749] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:19.582 [2024-05-12 07:02:26.539767] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.539776] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.539783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.582 [2024-05-12 07:02:26.539793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.582 [2024-05-12 07:02:26.539814] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.582 [2024-05-12 07:02:26.539993] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.582 [2024-05-12 07:02:26.540011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.582 [2024-05-12 07:02:26.540018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.540025] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd58bf0) on tqpair=0xcd8e10 00:22:19.582 [2024-05-12 07:02:26.540033] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:19.582 [2024-05-12 07:02:26.540042] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:19.582 [2024-05-12 07:02:26.540055] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:19.582 [2024-05-12 07:02:26.540165] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:19.582 [2024-05-12 07:02:26.540172] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:19.582 [2024-05-12 07:02:26.540184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.540192] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.540198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.582 [2024-05-12 07:02:26.540209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.582 [2024-05-12 07:02:26.540244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.582 [2024-05-12 07:02:26.540477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.582 [2024-05-12 07:02:26.540502] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.582 [2024-05-12 07:02:26.540509] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.540516] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd58bf0) on tqpair=0xcd8e10 00:22:19.582 [2024-05-12 07:02:26.540524] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:19.582 [2024-05-12 07:02:26.540541] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.540551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.540566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.582 [2024-05-12 07:02:26.540577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.582 [2024-05-12 07:02:26.540597] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.582 [2024-05-12 07:02:26.540827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.582 [2024-05-12 07:02:26.540845] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.582 [2024-05-12 07:02:26.540852] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.540859] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd58bf0) on tqpair=0xcd8e10 00:22:19.582 [2024-05-12 
07:02:26.540867] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:19.582 [2024-05-12 07:02:26.540876] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:19.582 [2024-05-12 07:02:26.540889] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:19.582 [2024-05-12 07:02:26.540903] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:19.582 [2024-05-12 07:02:26.540916] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.540924] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.540931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.582 [2024-05-12 07:02:26.540941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.582 [2024-05-12 07:02:26.540962] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.582 [2024-05-12 07:02:26.541196] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.582 [2024-05-12 07:02:26.541211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.582 [2024-05-12 07:02:26.541218] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.541225] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8e10): datao=0, datal=4096, cccid=0 00:22:19.582 [2024-05-12 07:02:26.541232] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd58bf0) on 
tqpair(0xcd8e10): expected_datao=0, payload_size=4096 00:22:19.582 [2024-05-12 07:02:26.541264] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.541273] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.581908] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.582 [2024-05-12 07:02:26.581927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.582 [2024-05-12 07:02:26.581934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.581941] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd58bf0) on tqpair=0xcd8e10 00:22:19.582 [2024-05-12 07:02:26.581952] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:19.582 [2024-05-12 07:02:26.581966] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:19.582 [2024-05-12 07:02:26.581974] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:19.582 [2024-05-12 07:02:26.581991] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:19.582 [2024-05-12 07:02:26.581998] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:19.582 [2024-05-12 07:02:26.582007] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:19.582 [2024-05-12 07:02:26.582021] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:19.582 [2024-05-12 07:02:26.582033] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.582041] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.582 [2024-05-12 07:02:26.582047] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.582 [2024-05-12 07:02:26.582062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:19.582 [2024-05-12 07:02:26.582086] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.582 [2024-05-12 07:02:26.582263] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.582 [2024-05-12 07:02:26.582276] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.582 [2024-05-12 07:02:26.582283] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582290] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd58bf0) on tqpair=0xcd8e10 00:22:19.583 [2024-05-12 07:02:26.582300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582307] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582314] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8e10) 00:22:19.583 [2024-05-12 07:02:26.582324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.583 [2024-05-12 07:02:26.582334] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582341] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582347] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcd8e10) 00:22:19.583 [2024-05-12 07:02:26.582356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:19.583 [2024-05-12 07:02:26.582365] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582372] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582378] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcd8e10) 00:22:19.583 [2024-05-12 07:02:26.582387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.583 [2024-05-12 07:02:26.582396] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582409] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.583 [2024-05-12 07:02:26.582418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.583 [2024-05-12 07:02:26.582427] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.582445] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.582473] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.582487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8e10) 00:22:19.583 [2024-05-12 07:02:26.582497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.583 
[2024-05-12 07:02:26.582519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58bf0, cid 0, qid 0 00:22:19.583 [2024-05-12 07:02:26.582545] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58d50, cid 1, qid 0 00:22:19.583 [2024-05-12 07:02:26.582553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd58eb0, cid 2, qid 0 00:22:19.583 [2024-05-12 07:02:26.582561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.583 [2024-05-12 07:02:26.582569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59170, cid 4, qid 0 00:22:19.583 [2024-05-12 07:02:26.586709] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.583 [2024-05-12 07:02:26.586729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.583 [2024-05-12 07:02:26.586737] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.586743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59170) on tqpair=0xcd8e10 00:22:19.583 [2024-05-12 07:02:26.586751] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:19.583 [2024-05-12 07:02:26.586760] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.586775] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.586801] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.586812] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.586820] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.586826] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8e10) 00:22:19.583 [2024-05-12 07:02:26.586837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:19.583 [2024-05-12 07:02:26.586859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59170, cid 4, qid 0 00:22:19.583 [2024-05-12 07:02:26.587043] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.583 [2024-05-12 07:02:26.587059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.583 [2024-05-12 07:02:26.587065] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587072] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59170) on tqpair=0xcd8e10 00:22:19.583 [2024-05-12 07:02:26.587127] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.587145] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.587159] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587167] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587173] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8e10) 00:22:19.583 [2024-05-12 07:02:26.587184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.583 [2024-05-12 07:02:26.587204] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xd59170, cid 4, qid 0 00:22:19.583 [2024-05-12 07:02:26.587403] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.583 [2024-05-12 07:02:26.587419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.583 [2024-05-12 07:02:26.587426] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587432] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8e10): datao=0, datal=4096, cccid=4 00:22:19.583 [2024-05-12 07:02:26.587440] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd59170) on tqpair(0xcd8e10): expected_datao=0, payload_size=4096 00:22:19.583 [2024-05-12 07:02:26.587451] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587459] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.583 [2024-05-12 07:02:26.587554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.583 [2024-05-12 07:02:26.587561] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587571] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59170) on tqpair=0xcd8e10 00:22:19.583 [2024-05-12 07:02:26.587589] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:19.583 [2024-05-12 07:02:26.587613] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.587632] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.587646] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587653] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8e10) 00:22:19.583 [2024-05-12 07:02:26.587670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.583 [2024-05-12 07:02:26.587708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59170, cid 4, qid 0 00:22:19.583 [2024-05-12 07:02:26.587910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.583 [2024-05-12 07:02:26.587925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.583 [2024-05-12 07:02:26.587932] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587939] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8e10): datao=0, datal=4096, cccid=4 00:22:19.583 [2024-05-12 07:02:26.587946] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd59170) on tqpair(0xcd8e10): expected_datao=0, payload_size=4096 00:22:19.583 [2024-05-12 07:02:26.587958] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.587965] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.588042] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.583 [2024-05-12 07:02:26.588054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.583 [2024-05-12 07:02:26.588061] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.588067] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59170) on tqpair=0xcd8e10 00:22:19.583 [2024-05-12 07:02:26.588090] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.588109] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:19.583 [2024-05-12 07:02:26.588123] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.588130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.588137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8e10) 00:22:19.583 [2024-05-12 07:02:26.588148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.583 [2024-05-12 07:02:26.588168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59170, cid 4, qid 0 00:22:19.583 [2024-05-12 07:02:26.588366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.583 [2024-05-12 07:02:26.588381] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.583 [2024-05-12 07:02:26.588388] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.583 [2024-05-12 07:02:26.588394] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8e10): datao=0, datal=4096, cccid=4 00:22:19.584 [2024-05-12 07:02:26.588402] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd59170) on tqpair(0xcd8e10): expected_datao=0, payload_size=4096 00:22:19.584 [2024-05-12 07:02:26.588413] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.588420] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.588467] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.584 [2024-05-12 07:02:26.588479] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:19.584 [2024-05-12 07:02:26.588485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.588492] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59170) on tqpair=0xcd8e10 00:22:19.584 [2024-05-12 07:02:26.588506] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:19.584 [2024-05-12 07:02:26.588520] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:19.584 [2024-05-12 07:02:26.588535] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:19.584 [2024-05-12 07:02:26.588547] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:19.584 [2024-05-12 07:02:26.588556] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:19.584 [2024-05-12 07:02:26.588565] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:19.584 [2024-05-12 07:02:26.588573] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:19.584 [2024-05-12 07:02:26.588582] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:19.584 [2024-05-12 07:02:26.588600] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.588609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.588615] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xcd8e10) 00:22:19.584 [2024-05-12 07:02:26.588626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.584 [2024-05-12 07:02:26.588637] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.588644] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.588650] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd8e10) 00:22:19.584 [2024-05-12 07:02:26.588659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.584 [2024-05-12 07:02:26.588709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59170, cid 4, qid 0 00:22:19.584 [2024-05-12 07:02:26.588722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd592d0, cid 5, qid 0 00:22:19.584 [2024-05-12 07:02:26.588932] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.584 [2024-05-12 07:02:26.588945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.584 [2024-05-12 07:02:26.588951] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.588958] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59170) on tqpair=0xcd8e10 00:22:19.584 [2024-05-12 07:02:26.588968] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.584 [2024-05-12 07:02:26.588978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.584 [2024-05-12 07:02:26.588984] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.588991] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd592d0) on tqpair=0xcd8e10 00:22:19.584 [2024-05-12 07:02:26.589007] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.589016] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.589022] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd8e10) 00:22:19.584 [2024-05-12 07:02:26.589033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.584 [2024-05-12 07:02:26.589057] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd592d0, cid 5, qid 0 00:22:19.584 [2024-05-12 07:02:26.589234] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.584 [2024-05-12 07:02:26.589246] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.584 [2024-05-12 07:02:26.589253] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.589260] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd592d0) on tqpair=0xcd8e10 00:22:19.584 [2024-05-12 07:02:26.589276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.589284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.589291] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd8e10) 00:22:19.584 [2024-05-12 07:02:26.589301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.584 [2024-05-12 07:02:26.589321] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd592d0, cid 5, qid 0 00:22:19.584 [2024-05-12 07:02:26.589469] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.584 [2024-05-12 07:02:26.589483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.584 [2024-05-12 07:02:26.589490] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.589497] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd592d0) on tqpair=0xcd8e10 00:22:19.584 [2024-05-12 07:02:26.589513] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.589522] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.589529] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd8e10) 00:22:19.584 [2024-05-12 07:02:26.589539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.584 [2024-05-12 07:02:26.589559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd592d0, cid 5, qid 0 00:22:19.584 [2024-05-12 07:02:26.593721] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.584 [2024-05-12 07:02:26.593737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.584 [2024-05-12 07:02:26.593744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.593751] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd592d0) on tqpair=0xcd8e10 00:22:19.584 [2024-05-12 07:02:26.593786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.593797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.593804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd8e10) 00:22:19.584 [2024-05-12 07:02:26.593815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.584 [2024-05-12 07:02:26.593827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.593834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.593841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8e10) 00:22:19.584 [2024-05-12 07:02:26.593850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.584 [2024-05-12 07:02:26.593861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.593869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.593875] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xcd8e10) 00:22:19.584 [2024-05-12 07:02:26.593884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.584 [2024-05-12 07:02:26.593900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.593908] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.593915] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcd8e10) 00:22:19.584 [2024-05-12 07:02:26.593924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.584 [2024-05-12 07:02:26.593947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd592d0, cid 5, qid 0 00:22:19.584 [2024-05-12 07:02:26.593958] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59170, cid 4, qid 0 00:22:19.584 [2024-05-12 07:02:26.593967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59430, cid 6, qid 0 
00:22:19.584 [2024-05-12 07:02:26.593974] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59590, cid 7, qid 0 00:22:19.584 [2024-05-12 07:02:26.594229] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.584 [2024-05-12 07:02:26.594242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.584 [2024-05-12 07:02:26.594249] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.594255] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8e10): datao=0, datal=8192, cccid=5 00:22:19.584 [2024-05-12 07:02:26.594263] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd592d0) on tqpair(0xcd8e10): expected_datao=0, payload_size=8192 00:22:19.584 [2024-05-12 07:02:26.594380] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.594391] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.594400] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.584 [2024-05-12 07:02:26.594409] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.584 [2024-05-12 07:02:26.594415] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.584 [2024-05-12 07:02:26.594422] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8e10): datao=0, datal=512, cccid=4 00:22:19.584 [2024-05-12 07:02:26.594429] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd59170) on tqpair(0xcd8e10): expected_datao=0, payload_size=512 00:22:19.585 [2024-05-12 07:02:26.594439] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594446] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.585 [2024-05-12 07:02:26.594464] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.585 [2024-05-12 07:02:26.594470] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594476] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8e10): datao=0, datal=512, cccid=6 00:22:19.585 [2024-05-12 07:02:26.594484] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd59430) on tqpair(0xcd8e10): expected_datao=0, payload_size=512 00:22:19.585 [2024-05-12 07:02:26.594494] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594501] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:19.585 [2024-05-12 07:02:26.594518] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:19.585 [2024-05-12 07:02:26.594525] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594531] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8e10): datao=0, datal=4096, cccid=7 00:22:19.585 [2024-05-12 07:02:26.594538] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd59590) on tqpair(0xcd8e10): expected_datao=0, payload_size=4096 00:22:19.585 [2024-05-12 07:02:26.594549] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594560] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594572] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.585 [2024-05-12 07:02:26.594582] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.585 [2024-05-12 07:02:26.594589] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0xd592d0) on tqpair=0xcd8e10 00:22:19.585 [2024-05-12 07:02:26.594615] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.585 [2024-05-12 07:02:26.594626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.585 [2024-05-12 07:02:26.594633] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594640] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59170) on tqpair=0xcd8e10 00:22:19.585 [2024-05-12 07:02:26.594654] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.585 [2024-05-12 07:02:26.594664] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.585 [2024-05-12 07:02:26.594671] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59430) on tqpair=0xcd8e10 00:22:19.585 [2024-05-12 07:02:26.594688] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.585 [2024-05-12 07:02:26.594710] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.585 [2024-05-12 07:02:26.594718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.585 [2024-05-12 07:02:26.594725] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59590) on tqpair=0xcd8e10 00:22:19.585 ===================================================== 00:22:19.585 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.585 ===================================================== 00:22:19.585 Controller Capabilities/Features 00:22:19.585 ================================ 00:22:19.585 Vendor ID: 8086 00:22:19.585 Subsystem Vendor ID: 8086 00:22:19.585 Serial Number: SPDK00000000000001 00:22:19.585 Model Number: SPDK bdev Controller 00:22:19.585 Firmware Version: 24.01.1 00:22:19.585 Recommended Arb Burst: 6 00:22:19.585 IEEE 
OUI Identifier: e4 d2 5c 00:22:19.585 Multi-path I/O 00:22:19.585 May have multiple subsystem ports: Yes 00:22:19.585 May have multiple controllers: Yes 00:22:19.585 Associated with SR-IOV VF: No 00:22:19.585 Max Data Transfer Size: 131072 00:22:19.585 Max Number of Namespaces: 32 00:22:19.585 Max Number of I/O Queues: 127 00:22:19.585 NVMe Specification Version (VS): 1.3 00:22:19.585 NVMe Specification Version (Identify): 1.3 00:22:19.585 Maximum Queue Entries: 128 00:22:19.585 Contiguous Queues Required: Yes 00:22:19.585 Arbitration Mechanisms Supported 00:22:19.585 Weighted Round Robin: Not Supported 00:22:19.585 Vendor Specific: Not Supported 00:22:19.585 Reset Timeout: 15000 ms 00:22:19.585 Doorbell Stride: 4 bytes 00:22:19.585 NVM Subsystem Reset: Not Supported 00:22:19.585 Command Sets Supported 00:22:19.585 NVM Command Set: Supported 00:22:19.585 Boot Partition: Not Supported 00:22:19.585 Memory Page Size Minimum: 4096 bytes 00:22:19.585 Memory Page Size Maximum: 4096 bytes 00:22:19.585 Persistent Memory Region: Not Supported 00:22:19.585 Optional Asynchronous Events Supported 00:22:19.585 Namespace Attribute Notices: Supported 00:22:19.585 Firmware Activation Notices: Not Supported 00:22:19.585 ANA Change Notices: Not Supported 00:22:19.585 PLE Aggregate Log Change Notices: Not Supported 00:22:19.585 LBA Status Info Alert Notices: Not Supported 00:22:19.585 EGE Aggregate Log Change Notices: Not Supported 00:22:19.585 Normal NVM Subsystem Shutdown event: Not Supported 00:22:19.585 Zone Descriptor Change Notices: Not Supported 00:22:19.585 Discovery Log Change Notices: Not Supported 00:22:19.585 Controller Attributes 00:22:19.585 128-bit Host Identifier: Supported 00:22:19.585 Non-Operational Permissive Mode: Not Supported 00:22:19.585 NVM Sets: Not Supported 00:22:19.585 Read Recovery Levels: Not Supported 00:22:19.585 Endurance Groups: Not Supported 00:22:19.585 Predictable Latency Mode: Not Supported 00:22:19.585 Traffic Based Keep ALive: Not Supported 
00:22:19.585 Namespace Granularity: Not Supported 00:22:19.585 SQ Associations: Not Supported 00:22:19.585 UUID List: Not Supported 00:22:19.585 Multi-Domain Subsystem: Not Supported 00:22:19.585 Fixed Capacity Management: Not Supported 00:22:19.585 Variable Capacity Management: Not Supported 00:22:19.585 Delete Endurance Group: Not Supported 00:22:19.585 Delete NVM Set: Not Supported 00:22:19.585 Extended LBA Formats Supported: Not Supported 00:22:19.585 Flexible Data Placement Supported: Not Supported 00:22:19.585 00:22:19.585 Controller Memory Buffer Support 00:22:19.585 ================================ 00:22:19.585 Supported: No 00:22:19.585 00:22:19.585 Persistent Memory Region Support 00:22:19.585 ================================ 00:22:19.585 Supported: No 00:22:19.585 00:22:19.585 Admin Command Set Attributes 00:22:19.585 ============================ 00:22:19.585 Security Send/Receive: Not Supported 00:22:19.585 Format NVM: Not Supported 00:22:19.585 Firmware Activate/Download: Not Supported 00:22:19.585 Namespace Management: Not Supported 00:22:19.585 Device Self-Test: Not Supported 00:22:19.585 Directives: Not Supported 00:22:19.585 NVMe-MI: Not Supported 00:22:19.585 Virtualization Management: Not Supported 00:22:19.585 Doorbell Buffer Config: Not Supported 00:22:19.585 Get LBA Status Capability: Not Supported 00:22:19.585 Command & Feature Lockdown Capability: Not Supported 00:22:19.585 Abort Command Limit: 4 00:22:19.585 Async Event Request Limit: 4 00:22:19.585 Number of Firmware Slots: N/A 00:22:19.585 Firmware Slot 1 Read-Only: N/A 00:22:19.585 Firmware Activation Without Reset: N/A 00:22:19.585 Multiple Update Detection Support: N/A 00:22:19.585 Firmware Update Granularity: No Information Provided 00:22:19.585 Per-Namespace SMART Log: No 00:22:19.585 Asymmetric Namespace Access Log Page: Not Supported 00:22:19.585 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:19.585 Command Effects Log Page: Supported 00:22:19.585 Get Log Page Extended Data: 
Supported 00:22:19.585 Telemetry Log Pages: Not Supported 00:22:19.585 Persistent Event Log Pages: Not Supported 00:22:19.585 Supported Log Pages Log Page: May Support 00:22:19.585 Commands Supported & Effects Log Page: Not Supported 00:22:19.585 Feature Identifiers & Effects Log Page:May Support 00:22:19.585 NVMe-MI Commands & Effects Log Page: May Support 00:22:19.585 Data Area 4 for Telemetry Log: Not Supported 00:22:19.585 Error Log Page Entries Supported: 128 00:22:19.585 Keep Alive: Supported 00:22:19.585 Keep Alive Granularity: 10000 ms 00:22:19.585 00:22:19.585 NVM Command Set Attributes 00:22:19.585 ========================== 00:22:19.585 Submission Queue Entry Size 00:22:19.585 Max: 64 00:22:19.585 Min: 64 00:22:19.585 Completion Queue Entry Size 00:22:19.585 Max: 16 00:22:19.585 Min: 16 00:22:19.585 Number of Namespaces: 32 00:22:19.585 Compare Command: Supported 00:22:19.585 Write Uncorrectable Command: Not Supported 00:22:19.585 Dataset Management Command: Supported 00:22:19.585 Write Zeroes Command: Supported 00:22:19.585 Set Features Save Field: Not Supported 00:22:19.585 Reservations: Supported 00:22:19.585 Timestamp: Not Supported 00:22:19.585 Copy: Supported 00:22:19.585 Volatile Write Cache: Present 00:22:19.585 Atomic Write Unit (Normal): 1 00:22:19.585 Atomic Write Unit (PFail): 1 00:22:19.585 Atomic Compare & Write Unit: 1 00:22:19.585 Fused Compare & Write: Supported 00:22:19.585 Scatter-Gather List 00:22:19.585 SGL Command Set: Supported 00:22:19.585 SGL Keyed: Supported 00:22:19.585 SGL Bit Bucket Descriptor: Not Supported 00:22:19.585 SGL Metadata Pointer: Not Supported 00:22:19.585 Oversized SGL: Not Supported 00:22:19.585 SGL Metadata Address: Not Supported 00:22:19.585 SGL Offset: Supported 00:22:19.585 Transport SGL Data Block: Not Supported 00:22:19.585 Replay Protected Memory Block: Not Supported 00:22:19.585 00:22:19.585 Firmware Slot Information 00:22:19.585 ========================= 00:22:19.585 Active slot: 1 00:22:19.585 Slot 1 
Firmware Revision: 24.01.1 00:22:19.585 00:22:19.585 00:22:19.585 Commands Supported and Effects 00:22:19.586 ============================== 00:22:19.586 Admin Commands 00:22:19.586 -------------- 00:22:19.586 Get Log Page (02h): Supported 00:22:19.586 Identify (06h): Supported 00:22:19.586 Abort (08h): Supported 00:22:19.586 Set Features (09h): Supported 00:22:19.586 Get Features (0Ah): Supported 00:22:19.586 Asynchronous Event Request (0Ch): Supported 00:22:19.586 Keep Alive (18h): Supported 00:22:19.586 I/O Commands 00:22:19.586 ------------ 00:22:19.586 Flush (00h): Supported LBA-Change 00:22:19.586 Write (01h): Supported LBA-Change 00:22:19.586 Read (02h): Supported 00:22:19.586 Compare (05h): Supported 00:22:19.586 Write Zeroes (08h): Supported LBA-Change 00:22:19.586 Dataset Management (09h): Supported LBA-Change 00:22:19.586 Copy (19h): Supported LBA-Change 00:22:19.586 Unknown (79h): Supported LBA-Change 00:22:19.586 Unknown (7Ah): Supported 00:22:19.586 00:22:19.586 Error Log 00:22:19.586 ========= 00:22:19.586 00:22:19.586 Arbitration 00:22:19.586 =========== 00:22:19.586 Arbitration Burst: 1 00:22:19.586 00:22:19.586 Power Management 00:22:19.586 ================ 00:22:19.586 Number of Power States: 1 00:22:19.586 Current Power State: Power State #0 00:22:19.586 Power State #0: 00:22:19.586 Max Power: 0.00 W 00:22:19.586 Non-Operational State: Operational 00:22:19.586 Entry Latency: Not Reported 00:22:19.586 Exit Latency: Not Reported 00:22:19.586 Relative Read Throughput: 0 00:22:19.586 Relative Read Latency: 0 00:22:19.586 Relative Write Throughput: 0 00:22:19.586 Relative Write Latency: 0 00:22:19.586 Idle Power: Not Reported 00:22:19.586 Active Power: Not Reported 00:22:19.586 Non-Operational Permissive Mode: Not Supported 00:22:19.586 00:22:19.586 Health Information 00:22:19.586 ================== 00:22:19.586 Critical Warnings: 00:22:19.586 Available Spare Space: OK 00:22:19.586 Temperature: OK 00:22:19.586 Device Reliability: OK 00:22:19.586 Read 
Only: No 00:22:19.586 Volatile Memory Backup: OK 00:22:19.586 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:19.586 Temperature Threshold: [2024-05-12 07:02:26.594845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.594857] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.594864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcd8e10) 00:22:19.586 [2024-05-12 07:02:26.594874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.586 [2024-05-12 07:02:26.594896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59590, cid 7, qid 0 00:22:19.586 [2024-05-12 07:02:26.595095] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.586 [2024-05-12 07:02:26.595110] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.586 [2024-05-12 07:02:26.595117] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.595124] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59590) on tqpair=0xcd8e10 00:22:19.586 [2024-05-12 07:02:26.595168] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:19.586 [2024-05-12 07:02:26.595189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.586 [2024-05-12 07:02:26.595201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.586 [2024-05-12 07:02:26.595211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.586 [2024-05-12 07:02:26.595221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.586 [2024-05-12 07:02:26.595233] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.595241] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.595247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.586 [2024-05-12 07:02:26.595258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.586 [2024-05-12 07:02:26.595279] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.586 [2024-05-12 07:02:26.595452] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.586 [2024-05-12 07:02:26.595467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.586 [2024-05-12 07:02:26.595474] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.595481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.586 [2024-05-12 07:02:26.595492] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.595500] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.595506] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.586 [2024-05-12 07:02:26.595517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.586 [2024-05-12 07:02:26.595543] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.586 [2024-05-12 07:02:26.595702] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:22:19.586 [2024-05-12 07:02:26.595715] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.586 [2024-05-12 07:02:26.595722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.595729] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.586 [2024-05-12 07:02:26.595737] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:19.586 [2024-05-12 07:02:26.595745] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:19.586 [2024-05-12 07:02:26.595760] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.595769] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.595776] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.586 [2024-05-12 07:02:26.595786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.586 [2024-05-12 07:02:26.595806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.586 [2024-05-12 07:02:26.595976] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.586 [2024-05-12 07:02:26.595988] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.586 [2024-05-12 07:02:26.595995] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596002] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.586 [2024-05-12 07:02:26.596018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596027] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:19.586 [2024-05-12 07:02:26.596034] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.586 [2024-05-12 07:02:26.596044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.586 [2024-05-12 07:02:26.596064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.586 [2024-05-12 07:02:26.596211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.586 [2024-05-12 07:02:26.596226] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.586 [2024-05-12 07:02:26.596233] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.586 [2024-05-12 07:02:26.596257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596265] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596272] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.586 [2024-05-12 07:02:26.596282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.586 [2024-05-12 07:02:26.596306] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.586 [2024-05-12 07:02:26.596451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.586 [2024-05-12 07:02:26.596466] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.586 [2024-05-12 07:02:26.596472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596479] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.586 [2024-05-12 07:02:26.596496] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596505] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596512] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.586 [2024-05-12 07:02:26.596522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.586 [2024-05-12 07:02:26.596542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.586 [2024-05-12 07:02:26.596685] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.586 [2024-05-12 07:02:26.596707] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.586 [2024-05-12 07:02:26.596715] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.586 [2024-05-12 07:02:26.596739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596754] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.586 [2024-05-12 07:02:26.596765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.586 [2024-05-12 07:02:26.596785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.586 [2024-05-12 07:02:26.596921] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.586 [2024-05-12 07:02:26.596933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:19.586 [2024-05-12 07:02:26.596940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.586 [2024-05-12 07:02:26.596963] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596972] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.586 [2024-05-12 07:02:26.596978] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.587 [2024-05-12 07:02:26.596988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.587 [2024-05-12 07:02:26.597008] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.587 [2024-05-12 07:02:26.597145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.587 [2024-05-12 07:02:26.597160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.587 [2024-05-12 07:02:26.597167] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.597173] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.587 [2024-05-12 07:02:26.597190] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.597199] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.597205] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.587 [2024-05-12 07:02:26.597216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.587 [2024-05-12 07:02:26.597236] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.587 [2024-05-12 07:02:26.597379] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.587 [2024-05-12 07:02:26.597394] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.587 [2024-05-12 07:02:26.597400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.597407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.587 [2024-05-12 07:02:26.597424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.597433] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.597439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.587 [2024-05-12 07:02:26.597450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.587 [2024-05-12 07:02:26.597470] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.587 [2024-05-12 07:02:26.597609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.587 [2024-05-12 07:02:26.597621] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.587 [2024-05-12 07:02:26.597627] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.597634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.587 [2024-05-12 07:02:26.597650] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.597659] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.597665] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xcd8e10) 00:22:19.587 [2024-05-12 07:02:26.597676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.587 [2024-05-12 07:02:26.601702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.587 [2024-05-12 07:02:26.601738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.587 [2024-05-12 07:02:26.601748] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.587 [2024-05-12 07:02:26.601755] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.601761] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.587 [2024-05-12 07:02:26.601793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.601803] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.601810] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8e10) 00:22:19.587 [2024-05-12 07:02:26.601821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.587 [2024-05-12 07:02:26.601843] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd59010, cid 3, qid 0 00:22:19.587 [2024-05-12 07:02:26.602018] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:19.587 [2024-05-12 07:02:26.602031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:19.587 [2024-05-12 07:02:26.602037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:19.587 [2024-05-12 07:02:26.602044] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd59010) on tqpair=0xcd8e10 00:22:19.587 [2024-05-12 07:02:26.602057] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:22:19.587 0 Kelvin (-273 Celsius)
00:22:19.587 Available Spare: 0%
00:22:19.587 Available Spare Threshold: 0%
00:22:19.587 Life Percentage Used: 0%
00:22:19.587 Data Units Read: 0
00:22:19.587 Data Units Written: 0
00:22:19.587 Host Read Commands: 0
00:22:19.587 Host Write Commands: 0
00:22:19.587 Controller Busy Time: 0 minutes
00:22:19.587 Power Cycles: 0
00:22:19.587 Power On Hours: 0 hours
00:22:19.587 Unsafe Shutdowns: 0
00:22:19.587 Unrecoverable Media Errors: 0
00:22:19.587 Lifetime Error Log Entries: 0
00:22:19.587 Warning Temperature Time: 0 minutes
00:22:19.587 Critical Temperature Time: 0 minutes
00:22:19.587
00:22:19.587 Number of Queues
00:22:19.587 ================
00:22:19.587 Number of I/O Submission Queues: 127
00:22:19.587 Number of I/O Completion Queues: 127
00:22:19.587
00:22:19.587 Active Namespaces
00:22:19.587 =================
00:22:19.587 Namespace ID:1
00:22:19.587 Error Recovery Timeout: Unlimited
00:22:19.587 Command Set Identifier: NVM (00h)
00:22:19.587 Deallocate: Supported
00:22:19.587 Deallocated/Unwritten Error: Not Supported
00:22:19.587 Deallocated Read Value: Unknown
00:22:19.587 Deallocate in Write Zeroes: Not Supported
00:22:19.587 Deallocated Guard Field: 0xFFFF
00:22:19.587 Flush: Supported
00:22:19.587 Reservation: Supported
00:22:19.587 Namespace Sharing Capabilities: Multiple Controllers
00:22:19.587 Size (in LBAs): 131072 (0GiB)
00:22:19.587 Capacity (in LBAs): 131072 (0GiB)
00:22:19.587 Utilization (in LBAs): 131072 (0GiB)
00:22:19.587 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:19.587 EUI64: ABCDEF0123456789
00:22:19.587 UUID: 14777187-5d7c-4b58-aaf8-f8c582e241ea
00:22:19.587 Thin Provisioning: Not Supported
00:22:19.587 Per-NS Atomic Units: Yes
00:22:19.587 Atomic Boundary Size (Normal): 0
00:22:19.587 Atomic Boundary Size (PFail): 0
00:22:19.587 Atomic Boundary Offset: 0
00:22:19.587 Maximum Single Source Range Length: 65535
00:22:19.587 Maximum Copy Length: 65535
00:22:19.587 Maximum Source Range Count: 1
00:22:19.587 NGUID/EUI64 Never Reused: No
00:22:19.587 Namespace Write Protected: No
00:22:19.587 Number of LBA Formats: 1
00:22:19.587 Current LBA Format: LBA Format #00
00:22:19.587 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:19.587
00:22:19.587 07:02:26 -- host/identify.sh@51 -- # sync
00:22:19.587 07:02:26 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:19.587 07:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:19.587 07:02:26 -- common/autotest_common.sh@10 -- # set +x
00:22:19.587 07:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:19.587 07:02:26 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:19.587 07:02:26 -- host/identify.sh@56 -- # nvmftestfini
00:22:19.587 07:02:26 -- nvmf/common.sh@476 -- # nvmfcleanup
00:22:19.587 07:02:26 -- nvmf/common.sh@116 -- # sync
00:22:19.587 07:02:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:22:19.587 07:02:26 -- nvmf/common.sh@119 -- # set +e
00:22:19.587 07:02:26 -- nvmf/common.sh@120 -- # for i in {1..20}
00:22:19.587 07:02:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:22:19.587 rmmod nvme_tcp
00:22:19.587 rmmod nvme_fabrics
00:22:19.587 rmmod nvme_keyring
00:22:19.587 07:02:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:22:19.587 07:02:26 -- nvmf/common.sh@123 -- # set -e
00:22:19.587 07:02:26 -- nvmf/common.sh@124 -- # return 0
00:22:19.587 07:02:26 -- nvmf/common.sh@477 -- # '[' -n 3102195 ']'
00:22:19.587 07:02:26 -- nvmf/common.sh@478 -- # killprocess 3102195
00:22:19.587 07:02:26 -- common/autotest_common.sh@926 -- # '[' -z 3102195 ']'
00:22:19.587 07:02:26 -- common/autotest_common.sh@930 -- # kill -0 3102195
00:22:19.587 07:02:26 -- common/autotest_common.sh@931 -- # uname
00:22:19.587 07:02:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:19.587 07:02:26 -- common/autotest_common.sh@932 -- # ps
--no-headers -o comm= 3102195 00:22:19.587 07:02:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:19.587 07:02:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:19.587 07:02:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3102195' 00:22:19.587 killing process with pid 3102195 00:22:19.587 07:02:26 -- common/autotest_common.sh@945 -- # kill 3102195 00:22:19.587 [2024-05-12 07:02:26.686913] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:19.587 07:02:26 -- common/autotest_common.sh@950 -- # wait 3102195 00:22:19.847 07:02:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:19.847 07:02:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:19.847 07:02:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:19.847 07:02:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.847 07:02:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:19.847 07:02:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.847 07:02:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.847 07:02:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.380 07:02:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:22.380 00:22:22.380 real 0m6.061s 00:22:22.380 user 0m7.324s 00:22:22.380 sys 0m1.843s 00:22:22.380 07:02:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.380 07:02:29 -- common/autotest_common.sh@10 -- # set +x 00:22:22.380 ************************************ 00:22:22.380 END TEST nvmf_identify 00:22:22.380 ************************************ 00:22:22.380 07:02:29 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:22.380 07:02:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 
1 ']' 00:22:22.380 07:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:22.380 07:02:29 -- common/autotest_common.sh@10 -- # set +x 00:22:22.380 ************************************ 00:22:22.380 START TEST nvmf_perf 00:22:22.380 ************************************ 00:22:22.380 07:02:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:22.380 * Looking for test storage... 00:22:22.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.380 07:02:29 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.380 07:02:29 -- nvmf/common.sh@7 -- # uname -s 00:22:22.380 07:02:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.380 07:02:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.380 07:02:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.380 07:02:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.380 07:02:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.380 07:02:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.380 07:02:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.380 07:02:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.380 07:02:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.380 07:02:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.380 07:02:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.380 07:02:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.380 07:02:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.380 07:02:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.380 07:02:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.380 07:02:29 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.380 07:02:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.380 07:02:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.380 07:02:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.380 07:02:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.380 07:02:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.380 07:02:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.380 07:02:29 -- paths/export.sh@5 -- # export PATH 00:22:22.380 07:02:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.380 07:02:29 -- nvmf/common.sh@46 -- # : 0 00:22:22.380 07:02:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:22.380 07:02:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:22.380 07:02:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:22.380 07:02:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.380 07:02:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.380 07:02:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:22.380 07:02:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:22.380 07:02:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:22.380 07:02:29 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:22.380 07:02:29 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:22.380 07:02:29 -- host/perf.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:22.380 07:02:29 -- host/perf.sh@17 -- # nvmftestinit 00:22:22.380 07:02:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:22.380 07:02:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.380 07:02:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:22.380 07:02:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:22.380 07:02:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:22.380 07:02:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.380 07:02:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.380 07:02:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.380 07:02:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:22.380 07:02:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:22.380 07:02:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:22.380 07:02:29 -- common/autotest_common.sh@10 -- # set +x 00:22:24.287 07:02:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:24.287 07:02:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:24.287 07:02:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:24.287 07:02:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:24.287 07:02:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:24.287 07:02:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:24.287 07:02:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:24.287 07:02:31 -- nvmf/common.sh@294 -- # net_devs=() 00:22:24.287 07:02:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:24.287 07:02:31 -- nvmf/common.sh@295 -- # e810=() 00:22:24.287 07:02:31 -- nvmf/common.sh@295 -- # local -ga e810 00:22:24.287 07:02:31 -- nvmf/common.sh@296 -- # x722=() 00:22:24.287 07:02:31 -- nvmf/common.sh@296 -- # local -ga x722 00:22:24.287 07:02:31 -- nvmf/common.sh@297 -- # mlx=() 00:22:24.287 07:02:31 -- nvmf/common.sh@297 -- # local -ga mlx 
00:22:24.287 07:02:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.287 07:02:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:24.287 07:02:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:24.287 07:02:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:24.287 07:02:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:24.287 07:02:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:24.287 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:24.287 07:02:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@351 -- # [[ 
tcp == rdma ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:24.287 07:02:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:24.287 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:24.287 07:02:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:24.287 07:02:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:24.287 07:02:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.287 07:02:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:24.287 07:02:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.287 07:02:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:24.287 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:24.287 07:02:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.287 07:02:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:24.287 07:02:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.287 07:02:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:24.287 07:02:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.287 07:02:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:24.287 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:24.287 07:02:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.287 07:02:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 
00:22:24.287 07:02:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:24.287 07:02:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:24.287 07:02:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.287 07:02:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.287 07:02:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.287 07:02:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:24.287 07:02:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.287 07:02:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.287 07:02:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:24.287 07:02:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.287 07:02:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.287 07:02:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:24.287 07:02:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:24.287 07:02:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.287 07:02:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.287 07:02:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.287 07:02:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.287 07:02:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:24.287 07:02:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.287 07:02:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.287 07:02:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.287 07:02:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:24.287 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:22:24.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:24.287 00:22:24.287 --- 10.0.0.2 ping statistics --- 00:22:24.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.287 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:24.287 07:02:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:22:24.287 00:22:24.287 --- 10.0.0.1 ping statistics --- 00:22:24.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.287 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:24.287 07:02:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.287 07:02:31 -- nvmf/common.sh@410 -- # return 0 00:22:24.287 07:02:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:24.287 07:02:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.287 07:02:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:24.287 07:02:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.287 07:02:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:24.287 07:02:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:24.287 07:02:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:24.287 07:02:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:24.287 07:02:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:24.287 07:02:31 -- common/autotest_common.sh@10 -- # set +x 00:22:24.287 07:02:31 -- nvmf/common.sh@469 -- # nvmfpid=3104422 00:22:24.287 07:02:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:24.287 07:02:31 -- nvmf/common.sh@470 -- # waitforlisten 3104422 00:22:24.287 07:02:31 -- 
common/autotest_common.sh@819 -- # '[' -z 3104422 ']' 00:22:24.287 07:02:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.287 07:02:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:24.287 07:02:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.287 07:02:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:24.287 07:02:31 -- common/autotest_common.sh@10 -- # set +x 00:22:24.287 [2024-05-12 07:02:31.273160] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:24.287 [2024-05-12 07:02:31.273241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.287 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.287 [2024-05-12 07:02:31.342551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.565 [2024-05-12 07:02:31.458055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:24.565 [2024-05-12 07:02:31.458216] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.565 [2024-05-12 07:02:31.458234] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.565 [2024-05-12 07:02:31.458248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:24.565 [2024-05-12 07:02:31.458332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.565 [2024-05-12 07:02:31.458403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.565 [2024-05-12 07:02:31.458433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.565 [2024-05-12 07:02:31.458436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.130 07:02:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:25.130 07:02:32 -- common/autotest_common.sh@852 -- # return 0 00:22:25.130 07:02:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:25.130 07:02:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:25.130 07:02:32 -- common/autotest_common.sh@10 -- # set +x 00:22:25.130 07:02:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.130 07:02:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:25.130 07:02:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:28.405 07:02:35 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:28.405 07:02:35 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:28.663 07:02:35 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:28.663 07:02:35 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:28.921 07:02:35 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:28.921 07:02:35 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:28.921 07:02:35 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:28.921 07:02:35 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:28.921 07:02:35 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:22:29.179 [2024-05-12 07:02:36.054396] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.179 07:02:36 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.436 07:02:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:29.436 07:02:36 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:29.436 07:02:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:29.436 07:02:36 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:29.693 07:02:36 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.948 [2024-05-12 07:02:37.030154] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.949 07:02:37 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:30.204 07:02:37 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:30.204 07:02:37 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:30.204 07:02:37 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:30.204 07:02:37 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:31.573 Initializing NVMe Controllers 00:22:31.573 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:22:31.573 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:22:31.573 Initialization complete. Launching workers. 
00:22:31.573 ======================================================== 00:22:31.573 Latency(us) 00:22:31.573 Device Information : IOPS MiB/s Average min max 00:22:31.573 PCIE (0000:88:00.0) NSID 1 from core 0: 86590.21 338.24 369.06 16.60 5260.64 00:22:31.573 ======================================================== 00:22:31.573 Total : 86590.21 338.24 369.06 16.60 5260.64 00:22:31.573 00:22:31.573 07:02:38 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:31.573 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.945 Initializing NVMe Controllers 00:22:32.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:32.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:32.945 Initialization complete. Launching workers. 
00:22:32.945 ======================================================== 00:22:32.945 Latency(us) 00:22:32.945 Device Information : IOPS MiB/s Average min max 00:22:32.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 98.00 0.38 10434.82 211.02 45817.75 00:22:32.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18844.01 7949.33 47900.54 00:22:32.945 ======================================================== 00:22:32.945 Total : 153.00 0.60 13457.73 211.02 47900.54 00:22:32.945 00:22:32.945 07:02:39 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:32.945 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.319 Initializing NVMe Controllers 00:22:34.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:34.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:34.319 Initialization complete. Launching workers. 
00:22:34.319 ======================================================== 00:22:34.319 Latency(us) 00:22:34.319 Device Information : IOPS MiB/s Average min max 00:22:34.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8486.76 33.15 3781.74 541.86 7831.57 00:22:34.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3989.42 15.58 8057.48 6920.77 15365.60 00:22:34.319 ======================================================== 00:22:34.319 Total : 12476.18 48.74 5148.96 541.86 15365.60 00:22:34.319 00:22:34.319 07:02:41 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:34.319 07:02:41 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:34.319 07:02:41 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.319 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.850 Initializing NVMe Controllers 00:22:36.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.850 Controller IO queue size 128, less than required. 00:22:36.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:36.850 Controller IO queue size 128, less than required. 00:22:36.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:36.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:36.851 Initialization complete. Launching workers. 
00:22:36.851 ======================================================== 00:22:36.851 Latency(us) 00:22:36.851 Device Information : IOPS MiB/s Average min max 00:22:36.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 904.98 226.24 146028.24 88356.67 198114.88 00:22:36.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 598.65 149.66 223353.03 85866.66 345719.99 00:22:36.851 ======================================================== 00:22:36.851 Total : 1503.63 375.91 176814.25 85866.66 345719.99 00:22:36.851 00:22:36.851 07:02:43 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:36.851 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.851 No valid NVMe controllers or AIO or URING devices found 00:22:36.851 Initializing NVMe Controllers 00:22:36.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.851 Controller IO queue size 128, less than required. 00:22:36.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:36.851 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:36.851 Controller IO queue size 128, less than required. 00:22:36.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:36.851 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:36.851 WARNING: Some requested NVMe devices were skipped 00:22:36.851 07:02:43 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:36.851 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.132 Initializing NVMe Controllers 00:22:40.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.132 Controller IO queue size 128, less than required. 00:22:40.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.132 Controller IO queue size 128, less than required. 00:22:40.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:40.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:40.132 Initialization complete. Launching workers. 
00:22:40.132 00:22:40.132 ==================== 00:22:40.132 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:40.132 TCP transport: 00:22:40.132 polls: 33267 00:22:40.132 idle_polls: 9802 00:22:40.132 sock_completions: 23465 00:22:40.132 nvme_completions: 3523 00:22:40.132 submitted_requests: 5409 00:22:40.132 queued_requests: 1 00:22:40.132 00:22:40.132 ==================== 00:22:40.132 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:40.132 TCP transport: 00:22:40.132 polls: 36344 00:22:40.132 idle_polls: 13629 00:22:40.132 sock_completions: 22715 00:22:40.132 nvme_completions: 2948 00:22:40.132 submitted_requests: 4559 00:22:40.132 queued_requests: 1 00:22:40.132 ======================================================== 00:22:40.132 Latency(us) 00:22:40.132 Device Information : IOPS MiB/s Average min max 00:22:40.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 944.46 236.12 140229.83 77876.36 208230.81 00:22:40.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 800.47 200.12 165217.07 72195.61 248892.56 00:22:40.132 ======================================================== 00:22:40.132 Total : 1744.93 436.23 151692.46 72195.61 248892.56 00:22:40.132 00:22:40.132 07:02:46 -- host/perf.sh@66 -- # sync 00:22:40.132 07:02:46 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.132 07:02:47 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:22:40.132 07:02:47 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:22:40.132 07:02:47 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:22:43.411 07:02:50 -- host/perf.sh@72 -- # ls_guid=82b6bb88-b9af-4c30-a26e-16b462826d5e 00:22:43.411 07:02:50 -- host/perf.sh@73 -- # get_lvs_free_mb 82b6bb88-b9af-4c30-a26e-16b462826d5e 
00:22:43.411 07:02:50 -- common/autotest_common.sh@1343 -- # local lvs_uuid=82b6bb88-b9af-4c30-a26e-16b462826d5e 00:22:43.411 07:02:50 -- common/autotest_common.sh@1344 -- # local lvs_info 00:22:43.411 07:02:50 -- common/autotest_common.sh@1345 -- # local fc 00:22:43.411 07:02:50 -- common/autotest_common.sh@1346 -- # local cs 00:22:43.411 07:02:50 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:43.411 07:02:50 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:22:43.411 { 00:22:43.411 "uuid": "82b6bb88-b9af-4c30-a26e-16b462826d5e", 00:22:43.411 "name": "lvs_0", 00:22:43.411 "base_bdev": "Nvme0n1", 00:22:43.411 "total_data_clusters": 238234, 00:22:43.411 "free_clusters": 238234, 00:22:43.411 "block_size": 512, 00:22:43.411 "cluster_size": 4194304 00:22:43.411 } 00:22:43.411 ]' 00:22:43.411 07:02:50 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="82b6bb88-b9af-4c30-a26e-16b462826d5e") .free_clusters' 00:22:43.411 07:02:50 -- common/autotest_common.sh@1348 -- # fc=238234 00:22:43.411 07:02:50 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="82b6bb88-b9af-4c30-a26e-16b462826d5e") .cluster_size' 00:22:43.685 07:02:50 -- common/autotest_common.sh@1349 -- # cs=4194304 00:22:43.685 07:02:50 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:22:43.685 07:02:50 -- common/autotest_common.sh@1353 -- # echo 952936 00:22:43.685 952936 00:22:43.685 07:02:50 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:22:43.685 07:02:50 -- host/perf.sh@78 -- # free_mb=20480 00:22:43.685 07:02:50 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 82b6bb88-b9af-4c30-a26e-16b462826d5e lbd_0 20480 00:22:44.264 07:02:51 -- host/perf.sh@80 -- # lb_guid=c5b56fc7-675b-4914-a2b5-81f0895c17e8 00:22:44.264 07:02:51 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore c5b56fc7-675b-4914-a2b5-81f0895c17e8 lvs_n_0 00:22:44.828 07:02:51 -- host/perf.sh@83 -- # ls_nested_guid=c10a1ca0-bcdc-4043-a4f0-9bcc036b83d7 00:22:44.828 07:02:51 -- host/perf.sh@84 -- # get_lvs_free_mb c10a1ca0-bcdc-4043-a4f0-9bcc036b83d7 00:22:44.828 07:02:51 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c10a1ca0-bcdc-4043-a4f0-9bcc036b83d7 00:22:44.828 07:02:51 -- common/autotest_common.sh@1344 -- # local lvs_info 00:22:44.828 07:02:51 -- common/autotest_common.sh@1345 -- # local fc 00:22:44.828 07:02:51 -- common/autotest_common.sh@1346 -- # local cs 00:22:44.828 07:02:51 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:45.085 07:02:52 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:22:45.085 { 00:22:45.085 "uuid": "82b6bb88-b9af-4c30-a26e-16b462826d5e", 00:22:45.085 "name": "lvs_0", 00:22:45.085 "base_bdev": "Nvme0n1", 00:22:45.085 "total_data_clusters": 238234, 00:22:45.085 "free_clusters": 233114, 00:22:45.085 "block_size": 512, 00:22:45.085 "cluster_size": 4194304 00:22:45.085 }, 00:22:45.085 { 00:22:45.085 "uuid": "c10a1ca0-bcdc-4043-a4f0-9bcc036b83d7", 00:22:45.085 "name": "lvs_n_0", 00:22:45.085 "base_bdev": "c5b56fc7-675b-4914-a2b5-81f0895c17e8", 00:22:45.085 "total_data_clusters": 5114, 00:22:45.085 "free_clusters": 5114, 00:22:45.085 "block_size": 512, 00:22:45.085 "cluster_size": 4194304 00:22:45.085 } 00:22:45.085 ]' 00:22:45.085 07:02:52 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c10a1ca0-bcdc-4043-a4f0-9bcc036b83d7") .free_clusters' 00:22:45.085 07:02:52 -- common/autotest_common.sh@1348 -- # fc=5114 00:22:45.085 07:02:52 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c10a1ca0-bcdc-4043-a4f0-9bcc036b83d7") .cluster_size' 00:22:45.341 07:02:52 -- common/autotest_common.sh@1349 -- # cs=4194304 00:22:45.341 07:02:52 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:22:45.341 07:02:52 
-- common/autotest_common.sh@1353 -- # echo 20456 00:22:45.341 20456 00:22:45.341 07:02:52 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:22:45.341 07:02:52 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c10a1ca0-bcdc-4043-a4f0-9bcc036b83d7 lbd_nest_0 20456 00:22:45.597 07:02:52 -- host/perf.sh@88 -- # lb_nested_guid=a688ae93-1a94-4447-a9f7-281e964a1ccf 00:22:45.597 07:02:52 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.597 07:02:52 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:22:45.854 07:02:52 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a688ae93-1a94-4447-a9f7-281e964a1ccf 00:22:45.854 07:02:52 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.111 07:02:53 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:22:46.111 07:02:53 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:22:46.111 07:02:53 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:22:46.111 07:02:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:46.111 07:02:53 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:46.368 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.559 Initializing NVMe Controllers 00:22:58.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:58.559 Initialization complete. Launching workers. 
00:22:58.559 ======================================================== 00:22:58.559 Latency(us) 00:22:58.559 Device Information : IOPS MiB/s Average min max 00:22:58.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.90 0.02 20523.85 266.26 48143.50 00:22:58.559 ======================================================== 00:22:58.559 Total : 48.90 0.02 20523.85 266.26 48143.50 00:22:58.559 00:22:58.559 07:03:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:58.559 07:03:03 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:58.559 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.520 Initializing NVMe Controllers 00:23:08.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.520 Initialization complete. Launching workers. 
00:23:08.520 ======================================================== 00:23:08.520 Latency(us) 00:23:08.520 Device Information : IOPS MiB/s Average min max 00:23:08.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.18 10.15 12327.69 3990.62 47887.83 00:23:08.520 ======================================================== 00:23:08.520 Total : 81.18 10.15 12327.69 3990.62 47887.83 00:23:08.520 00:23:08.520 07:03:13 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:08.520 07:03:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:08.520 07:03:13 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:08.520 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.486 Initializing NVMe Controllers 00:23:18.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:18.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:18.486 Initialization complete. Launching workers. 
00:23:18.486 ======================================================== 00:23:18.486 Latency(us) 00:23:18.486 Device Information : IOPS MiB/s Average min max 00:23:18.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7279.90 3.55 4396.86 255.98 11957.55 00:23:18.486 ======================================================== 00:23:18.486 Total : 7279.90 3.55 4396.86 255.98 11957.55 00:23:18.486 00:23:18.486 07:03:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:18.486 07:03:24 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:18.486 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.501 Initializing NVMe Controllers 00:23:28.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:28.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:28.501 Initialization complete. Launching workers. 
00:23:28.501 ======================================================== 00:23:28.501 Latency(us) 00:23:28.501 Device Information : IOPS MiB/s Average min max 00:23:28.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1747.49 218.44 18320.05 1328.14 40894.64 00:23:28.501 ======================================================== 00:23:28.501 Total : 1747.49 218.44 18320.05 1328.14 40894.64 00:23:28.501 00:23:28.501 07:03:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:28.501 07:03:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:28.501 07:03:34 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:28.501 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.459 Initializing NVMe Controllers 00:23:38.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.459 Controller IO queue size 128, less than required. 00:23:38.459 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:38.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:38.459 Initialization complete. Launching workers. 
00:23:38.459 ======================================================== 00:23:38.459 Latency(us) 00:23:38.459 Device Information : IOPS MiB/s Average min max 00:23:38.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11940.40 5.83 10721.23 1874.42 25680.89 00:23:38.459 ======================================================== 00:23:38.459 Total : 11940.40 5.83 10721.23 1874.42 25680.89 00:23:38.459 00:23:38.459 07:03:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:38.459 07:03:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:38.459 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.424 Initializing NVMe Controllers 00:23:48.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.424 Controller IO queue size 128, less than required. 00:23:48.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:48.424 Initialization complete. Launching workers. 
00:23:48.424 ======================================================== 00:23:48.424 Latency(us) 00:23:48.424 Device Information : IOPS MiB/s Average min max 00:23:48.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1180.90 147.61 108878.27 20379.70 225566.11 00:23:48.424 ======================================================== 00:23:48.424 Total : 1180.90 147.61 108878.27 20379.70 225566.11 00:23:48.424 00:23:48.424 07:03:55 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.424 07:03:55 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a688ae93-1a94-4447-a9f7-281e964a1ccf 00:23:48.988 07:03:56 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:49.244 07:03:56 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c5b56fc7-675b-4914-a2b5-81f0895c17e8 00:23:49.501 07:03:56 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:49.758 07:03:56 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:49.758 07:03:56 -- host/perf.sh@114 -- # nvmftestfini 00:23:49.758 07:03:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:49.758 07:03:56 -- nvmf/common.sh@116 -- # sync 00:23:49.758 07:03:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:49.758 07:03:56 -- nvmf/common.sh@119 -- # set +e 00:23:49.758 07:03:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:49.758 07:03:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:49.758 rmmod nvme_tcp 00:23:49.758 rmmod nvme_fabrics 00:23:49.758 rmmod nvme_keyring 00:23:50.015 07:03:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:50.015 07:03:56 -- nvmf/common.sh@123 -- # set -e 00:23:50.015 07:03:56 -- 
nvmf/common.sh@124 -- # return 0 00:23:50.015 07:03:56 -- nvmf/common.sh@477 -- # '[' -n 3104422 ']' 00:23:50.015 07:03:56 -- nvmf/common.sh@478 -- # killprocess 3104422 00:23:50.015 07:03:56 -- common/autotest_common.sh@926 -- # '[' -z 3104422 ']' 00:23:50.015 07:03:56 -- common/autotest_common.sh@930 -- # kill -0 3104422 00:23:50.015 07:03:56 -- common/autotest_common.sh@931 -- # uname 00:23:50.015 07:03:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:50.015 07:03:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3104422 00:23:50.015 07:03:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:50.015 07:03:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:50.015 07:03:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3104422' 00:23:50.015 killing process with pid 3104422 00:23:50.015 07:03:56 -- common/autotest_common.sh@945 -- # kill 3104422 00:23:50.015 07:03:56 -- common/autotest_common.sh@950 -- # wait 3104422 00:23:51.915 07:03:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:51.915 07:03:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:51.915 07:03:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:51.915 07:03:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.915 07:03:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:51.915 07:03:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.915 07:03:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.915 07:03:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.849 07:04:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:53.849 00:23:53.849 real 1m31.540s 00:23:53.849 user 5m37.360s 00:23:53.849 sys 0m15.655s 00:23:53.849 07:04:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:53.849 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:53.849 
************************************ 00:23:53.849 END TEST nvmf_perf 00:23:53.849 ************************************ 00:23:53.849 07:04:00 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:53.849 07:04:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:53.849 07:04:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:53.849 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:53.849 ************************************ 00:23:53.849 START TEST nvmf_fio_host 00:23:53.849 ************************************ 00:23:53.849 07:04:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:53.849 * Looking for test storage... 00:23:53.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.849 07:04:00 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.849 07:04:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.849 07:04:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.849 07:04:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.849 07:04:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.849 07:04:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.850 07:04:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.850 07:04:00 -- paths/export.sh@5 -- # export PATH 00:23:53.850 07:04:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.850 07:04:00 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.850 07:04:00 -- nvmf/common.sh@7 -- # uname -s 00:23:53.850 07:04:00 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:53.850 07:04:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.850 07:04:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.850 07:04:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.850 07:04:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.850 07:04:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.850 07:04:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.850 07:04:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.850 07:04:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.850 07:04:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.850 07:04:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:53.850 07:04:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:53.850 07:04:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.850 07:04:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.850 07:04:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.850 07:04:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.850 07:04:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.850 07:04:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.850 07:04:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.850 07:04:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.850 07:04:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.850 07:04:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.850 07:04:00 -- paths/export.sh@5 -- # export PATH 00:23:53.850 07:04:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.850 07:04:00 -- nvmf/common.sh@46 -- # : 0 00:23:53.850 07:04:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:53.850 07:04:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:53.850 07:04:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:53.850 07:04:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.850 07:04:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.850 07:04:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:53.850 07:04:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:53.850 07:04:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:53.850 07:04:00 -- host/fio.sh@12 -- # nvmftestinit 00:23:53.850 07:04:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:53.850 07:04:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.850 07:04:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:53.850 07:04:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:53.850 07:04:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:53.850 07:04:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.850 07:04:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.850 07:04:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.850 07:04:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:53.850 07:04:00 -- nvmf/common.sh@402 -- # 
gather_supported_nvmf_pci_devs 00:23:53.850 07:04:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:53.850 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:55.752 07:04:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:55.752 07:04:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:55.752 07:04:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:55.752 07:04:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:55.752 07:04:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:55.752 07:04:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:55.752 07:04:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:55.752 07:04:02 -- nvmf/common.sh@294 -- # net_devs=() 00:23:55.752 07:04:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:55.752 07:04:02 -- nvmf/common.sh@295 -- # e810=() 00:23:55.752 07:04:02 -- nvmf/common.sh@295 -- # local -ga e810 00:23:55.752 07:04:02 -- nvmf/common.sh@296 -- # x722=() 00:23:55.752 07:04:02 -- nvmf/common.sh@296 -- # local -ga x722 00:23:55.752 07:04:02 -- nvmf/common.sh@297 -- # mlx=() 00:23:55.752 07:04:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:55.752 07:04:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.752 07:04:02 -- 
nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.752 07:04:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:55.752 07:04:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:55.752 07:04:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:55.752 07:04:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:55.752 07:04:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:55.752 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:55.752 07:04:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:55.752 07:04:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:55.752 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:55.752 07:04:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:55.752 07:04:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@381 -- # for pci in 
"${pci_devs[@]}" 00:23:55.752 07:04:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.752 07:04:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:55.752 07:04:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.752 07:04:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:55.752 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:55.752 07:04:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.752 07:04:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:55.752 07:04:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.752 07:04:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:55.752 07:04:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.752 07:04:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:55.752 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:55.752 07:04:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.752 07:04:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:55.752 07:04:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:55.752 07:04:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:55.752 07:04:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:55.752 07:04:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.752 07:04:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.752 07:04:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.752 07:04:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:55.752 07:04:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.752 07:04:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.752 07:04:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:55.752 07:04:02 -- nvmf/common.sh@241 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.752 07:04:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.752 07:04:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:55.753 07:04:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:55.753 07:04:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.753 07:04:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.753 07:04:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.753 07:04:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.753 07:04:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:55.753 07:04:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.753 07:04:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.753 07:04:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.753 07:04:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:55.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:23:55.753 00:23:55.753 --- 10.0.0.2 ping statistics --- 00:23:55.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.753 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:23:55.753 07:04:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:23:55.753 00:23:55.753 --- 10.0.0.1 ping statistics --- 00:23:55.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.753 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:55.753 07:04:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.753 07:04:02 -- nvmf/common.sh@410 -- # return 0 00:23:55.753 07:04:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:55.753 07:04:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.753 07:04:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:55.753 07:04:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:55.753 07:04:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.753 07:04:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:55.753 07:04:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:55.753 07:04:02 -- host/fio.sh@14 -- # [[ y != y ]] 00:23:55.753 07:04:02 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:23:55.753 07:04:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:55.753 07:04:02 -- common/autotest_common.sh@10 -- # set +x 00:23:55.753 07:04:02 -- host/fio.sh@22 -- # nvmfpid=3117359 00:23:55.753 07:04:02 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:55.753 07:04:02 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.753 07:04:02 -- host/fio.sh@26 -- # waitforlisten 3117359 00:23:55.753 07:04:02 -- common/autotest_common.sh@819 -- # '[' -z 3117359 ']' 00:23:55.753 07:04:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.753 07:04:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:55.753 07:04:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:55.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.753 07:04:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:55.753 07:04:02 -- common/autotest_common.sh@10 -- # set +x 00:23:55.753 [2024-05-12 07:04:02.780540] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:55.753 [2024-05-12 07:04:02.780609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.753 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.753 [2024-05-12 07:04:02.849882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.010 [2024-05-12 07:04:02.967237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:56.010 [2024-05-12 07:04:02.967390] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.010 [2024-05-12 07:04:02.967408] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.010 [2024-05-12 07:04:02.967423] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
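The nvmf_tcp_init steps traced above move one port of the NIC pair into a private network namespace (cvl_0_0 becomes the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1) so TCP traffic between them crosses the physical wire. A minimal sketch of that sequence, using the interface and namespace names from this run; `build_netns_cmds` is an illustrative helper (not part of nvmf/common.sh) that only prints the commands, since actually applying them requires root:

```shell
# Print the namespace-setup sequence this log executes (sketch only;
# running these commands for real needs CAP_NET_ADMIN/root).
build_netns_cmds() {
  local ns=$1 target_if=$2 initiator_if=$3
  cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
EOF
}
build_netns_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The two ping checks in the log (root namespace to 10.0.0.2, namespace to 10.0.0.1) then verify both directions of this topology before nvmf_tgt is launched under `ip netns exec`.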
00:23:56.010 [2024-05-12 07:04:02.967513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.010 [2024-05-12 07:04:02.967584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.010 [2024-05-12 07:04:02.967616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.010 [2024-05-12 07:04:02.967618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.942 07:04:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:56.942 07:04:03 -- common/autotest_common.sh@852 -- # return 0 00:23:56.942 07:04:03 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.942 07:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.942 07:04:03 -- common/autotest_common.sh@10 -- # set +x 00:23:56.942 [2024-05-12 07:04:03.729109] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.942 07:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.942 07:04:03 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:23:56.942 07:04:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:56.942 07:04:03 -- common/autotest_common.sh@10 -- # set +x 00:23:56.942 07:04:03 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:56.942 07:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.942 07:04:03 -- common/autotest_common.sh@10 -- # set +x 00:23:56.942 Malloc1 00:23:56.942 07:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.942 07:04:03 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:56.942 07:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.942 07:04:03 -- common/autotest_common.sh@10 -- # set +x 00:23:56.942 07:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.942 07:04:03 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:56.942 07:04:03 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.942 07:04:03 -- common/autotest_common.sh@10 -- # set +x 00:23:56.942 07:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.942 07:04:03 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.942 07:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.942 07:04:03 -- common/autotest_common.sh@10 -- # set +x 00:23:56.942 [2024-05-12 07:04:03.799912] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.942 07:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.942 07:04:03 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:56.942 07:04:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.942 07:04:03 -- common/autotest_common.sh@10 -- # set +x 00:23:56.942 07:04:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.942 07:04:03 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:56.942 07:04:03 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:56.942 07:04:03 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:56.942 07:04:03 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:23:56.942 07:04:03 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:56.942 07:04:03 -- common/autotest_common.sh@1318 -- # local sanitizers 00:23:56.942 07:04:03 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:56.942 07:04:03 -- common/autotest_common.sh@1320 -- # shift 00:23:56.942 07:04:03 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:23:56.942 07:04:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.942 07:04:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:56.942 07:04:03 -- common/autotest_common.sh@1324 -- # grep libasan 00:23:56.942 07:04:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:23:56.942 07:04:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:23:56.942 07:04:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:23:56.942 07:04:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.942 07:04:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:56.942 07:04:03 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:23:56.942 07:04:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:23:56.942 07:04:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:23:56.942 07:04:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:23:56.942 07:04:03 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:56.942 07:04:03 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:56.942 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:56.943 fio-3.35 00:23:56.943 Starting 1 thread 00:23:56.943 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.471 00:23:59.471 test: (groupid=0, jobs=1): err= 0: pid=3117713: Sun May 12 07:04:06 
2024 00:23:59.471 read: IOPS=9872, BW=38.6MiB/s (40.4MB/s)(77.4MiB/2006msec) 00:23:59.471 slat (nsec): min=1947, max=191723, avg=2547.46, stdev=1814.36 00:23:59.471 clat (usec): min=3737, max=12350, avg=7141.48, stdev=550.09 00:23:59.471 lat (usec): min=3761, max=12352, avg=7144.02, stdev=550.02 00:23:59.471 clat percentiles (usec): 00:23:59.471 | 1.00th=[ 5866], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6718], 00:23:59.471 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:23:59.471 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8029], 00:23:59.471 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10028], 99.95th=[11207], 00:23:59.471 | 99.99th=[12125] 00:23:59.471 bw ( KiB/s): min=38544, max=40312, per=99.95%, avg=39470.00, stdev=859.31, samples=4 00:23:59.471 iops : min= 9636, max=10078, avg=9867.50, stdev=214.83, samples=4 00:23:59.471 write: IOPS=9884, BW=38.6MiB/s (40.5MB/s)(77.5MiB/2006msec); 0 zone resets 00:23:59.471 slat (nsec): min=2072, max=89788, avg=2671.21, stdev=1251.65 00:23:59.471 clat (usec): min=1465, max=11089, avg=5734.27, stdev=486.44 00:23:59.471 lat (usec): min=1472, max=11092, avg=5736.94, stdev=486.37 00:23:59.471 clat percentiles (usec): 00:23:59.471 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5342], 00:23:59.471 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5866], 00:23:59.471 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6456], 00:23:59.471 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 8979], 99.95th=[ 9896], 00:23:59.471 | 99.99th=[11076] 00:23:59.471 bw ( KiB/s): min=39128, max=39808, per=100.00%, avg=39542.00, stdev=295.12, samples=4 00:23:59.471 iops : min= 9782, max= 9952, avg=9885.50, stdev=73.78, samples=4 00:23:59.471 lat (msec) : 2=0.01%, 4=0.10%, 10=99.83%, 20=0.07% 00:23:59.471 cpu : usr=51.37%, sys=39.30%, ctx=50, majf=0, minf=6 00:23:59.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:59.471 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:59.471 issued rwts: total=19804,19829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:59.471 00:23:59.471 Run status group 0 (all jobs): 00:23:59.472 READ: bw=38.6MiB/s (40.4MB/s), 38.6MiB/s-38.6MiB/s (40.4MB/s-40.4MB/s), io=77.4MiB (81.1MB), run=2006-2006msec 00:23:59.472 WRITE: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=77.5MiB (81.2MB), run=2006-2006msec 00:23:59.472 07:04:06 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:59.472 07:04:06 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:59.472 07:04:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:23:59.472 07:04:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:59.472 07:04:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:23:59.472 07:04:06 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:59.472 07:04:06 -- common/autotest_common.sh@1320 -- # shift 00:23:59.472 07:04:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:23:59.472 07:04:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.472 07:04:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:59.472 07:04:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:23:59.472 07:04:06 -- 
common/autotest_common.sh@1324 -- # awk '{print $3}' 00:23:59.472 07:04:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:23:59.472 07:04:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:23:59.472 07:04:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.472 07:04:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:59.472 07:04:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:23:59.472 07:04:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:23:59.472 07:04:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:23:59.472 07:04:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:23:59.472 07:04:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:59.472 07:04:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:59.472 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:59.472 fio-3.35 00:23:59.472 Starting 1 thread 00:23:59.472 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.002 00:24:02.002 test: (groupid=0, jobs=1): err= 0: pid=3118058: Sun May 12 07:04:09 2024 00:24:02.002 read: IOPS=8229, BW=129MiB/s (135MB/s)(258MiB/2007msec) 00:24:02.002 slat (usec): min=2, max=113, avg= 3.78, stdev= 1.90 00:24:02.002 clat (usec): min=2265, max=20110, avg=9472.20, stdev=2295.26 00:24:02.002 lat (usec): min=2269, max=20113, avg=9475.98, stdev=2295.43 00:24:02.002 clat percentiles (usec): 00:24:02.002 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7439], 00:24:02.002 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 00:24:02.002 | 70.00th=[10683], 80.00th=[11469], 
90.00th=[12518], 95.00th=[13173], 00:24:02.002 | 99.00th=[15533], 99.50th=[16319], 99.90th=[17695], 99.95th=[17957], 00:24:02.002 | 99.99th=[19530] 00:24:02.002 bw ( KiB/s): min=61824, max=69984, per=50.30%, avg=66232.00, stdev=3393.25, samples=4 00:24:02.002 iops : min= 3864, max= 4374, avg=4139.50, stdev=212.08, samples=4 00:24:02.002 write: IOPS=4750, BW=74.2MiB/s (77.8MB/s)(135MiB/1822msec); 0 zone resets 00:24:02.002 slat (usec): min=30, max=137, avg=34.12, stdev= 5.59 00:24:02.002 clat (usec): min=3540, max=20899, avg=10789.81, stdev=1706.02 00:24:02.002 lat (usec): min=3576, max=20931, avg=10823.93, stdev=1706.16 00:24:02.002 clat percentiles (usec): 00:24:02.002 | 1.00th=[ 7373], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9372], 00:24:02.002 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:24:02.002 | 70.00th=[11469], 80.00th=[12125], 90.00th=[13042], 95.00th=[13698], 00:24:02.002 | 99.00th=[15270], 99.50th=[15795], 99.90th=[17433], 99.95th=[20579], 00:24:02.002 | 99.99th=[20841] 00:24:02.002 bw ( KiB/s): min=64160, max=73408, per=90.56%, avg=68832.00, stdev=4044.76, samples=4 00:24:02.002 iops : min= 4010, max= 4588, avg=4302.00, stdev=252.80, samples=4 00:24:02.002 lat (msec) : 4=0.22%, 10=50.76%, 20=48.99%, 50=0.03% 00:24:02.002 cpu : usr=75.67%, sys=21.14%, ctx=16, majf=0, minf=2 00:24:02.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:02.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:02.002 issued rwts: total=16516,8655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:02.002 00:24:02.002 Run status group 0 (all jobs): 00:24:02.003 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=258MiB (271MB), run=2007-2007msec 00:24:02.003 WRITE: bw=74.2MiB/s (77.8MB/s), 74.2MiB/s-74.2MiB/s 
(77.8MB/s-77.8MB/s), io=135MiB (142MB), run=1822-1822msec 00:24:02.003 07:04:09 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.003 07:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.003 07:04:09 -- common/autotest_common.sh@10 -- # set +x 00:24:02.003 07:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.003 07:04:09 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:24:02.003 07:04:09 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:24:02.003 07:04:09 -- host/fio.sh@49 -- # get_nvme_bdfs 00:24:02.003 07:04:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:02.003 07:04:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:24:02.003 07:04:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:02.003 07:04:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:02.003 07:04:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:02.003 07:04:09 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:24:02.003 07:04:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:24:02.003 07:04:09 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:24:02.003 07:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.003 07:04:09 -- common/autotest_common.sh@10 -- # set +x 00:24:05.278 Nvme0n1 00:24:05.278 07:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:05.278 07:04:11 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:05.278 07:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:05.278 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.804 07:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.804 07:04:14 -- host/fio.sh@51 -- # ls_guid=d966a7fa-83a6-43ff-9f6a-73e201aa365f 
00:24:07.804 07:04:14 -- host/fio.sh@52 -- # get_lvs_free_mb d966a7fa-83a6-43ff-9f6a-73e201aa365f 00:24:07.804 07:04:14 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d966a7fa-83a6-43ff-9f6a-73e201aa365f 00:24:07.804 07:04:14 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:07.804 07:04:14 -- common/autotest_common.sh@1345 -- # local fc 00:24:07.804 07:04:14 -- common/autotest_common.sh@1346 -- # local cs 00:24:07.804 07:04:14 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:07.804 07:04:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.804 07:04:14 -- common/autotest_common.sh@10 -- # set +x 00:24:07.804 07:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.804 07:04:14 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:07.804 { 00:24:07.804 "uuid": "d966a7fa-83a6-43ff-9f6a-73e201aa365f", 00:24:07.804 "name": "lvs_0", 00:24:07.804 "base_bdev": "Nvme0n1", 00:24:07.804 "total_data_clusters": 930, 00:24:07.804 "free_clusters": 930, 00:24:07.804 "block_size": 512, 00:24:07.804 "cluster_size": 1073741824 00:24:07.804 } 00:24:07.804 ]' 00:24:07.804 07:04:14 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d966a7fa-83a6-43ff-9f6a-73e201aa365f") .free_clusters' 00:24:07.804 07:04:14 -- common/autotest_common.sh@1348 -- # fc=930 00:24:07.804 07:04:14 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d966a7fa-83a6-43ff-9f6a-73e201aa365f") .cluster_size' 00:24:07.804 07:04:14 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:24:07.804 07:04:14 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:24:07.804 07:04:14 -- common/autotest_common.sh@1353 -- # echo 952320 00:24:07.804 952320 00:24:07.804 07:04:14 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:24:07.804 07:04:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.804 07:04:14 -- common/autotest_common.sh@10 -- # set +x 00:24:07.804 
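The lvstore query above reports 930 free clusters with a cluster_size of 1073741824 bytes (1 GiB), and get_lvs_free_mb turns that into the 952320 passed to bdev_lvol_create. A sketch of that arithmetic; `free_mb` here is an illustrative stand-in, not the helper's real implementation:

```shell
# free MiB = free_clusters * cluster_size_bytes / 1 MiB
free_mb() { echo $(( $1 * $2 / 1048576 )); }
free_mb 930 1073741824   # 930 one-GiB clusters -> 952320 MiB
```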
390e6382-1de5-4081-b822-09febbf50158 00:24:07.804 07:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.804 07:04:14 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:07.804 07:04:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.804 07:04:14 -- common/autotest_common.sh@10 -- # set +x 00:24:07.804 07:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.804 07:04:14 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:07.804 07:04:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.804 07:04:14 -- common/autotest_common.sh@10 -- # set +x 00:24:07.804 07:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.804 07:04:14 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:07.804 07:04:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.804 07:04:14 -- common/autotest_common.sh@10 -- # set +x 00:24:07.804 07:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.804 07:04:14 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:07.804 07:04:14 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:07.804 07:04:14 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:07.804 07:04:14 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:07.804 07:04:14 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:07.804 07:04:14 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:07.804 07:04:14 -- common/autotest_common.sh@1320 -- # shift 00:24:07.804 07:04:14 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:07.804 07:04:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:07.804 07:04:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:07.804 07:04:14 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:07.804 07:04:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:07.804 07:04:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:07.804 07:04:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:07.804 07:04:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:07.804 07:04:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:07.804 07:04:14 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:07.804 07:04:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:07.804 07:04:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:07.804 07:04:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:07.804 07:04:14 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:07.804 07:04:14 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:08.062 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:08.062 fio-3.35 00:24:08.062 Starting 1 thread 00:24:08.062 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.584 00:24:10.584 test: (groupid=0, jobs=1): err= 0: pid=3119234: Sun May 12 07:04:17 
2024 00:24:10.584 read: IOPS=6526, BW=25.5MiB/s (26.7MB/s)(51.2MiB/2007msec) 00:24:10.584 slat (nsec): min=1957, max=177932, avg=2576.64, stdev=2493.00 00:24:10.584 clat (usec): min=867, max=170954, avg=10810.31, stdev=11220.47 00:24:10.584 lat (usec): min=870, max=171002, avg=10812.89, stdev=11220.84 00:24:10.584 clat percentiles (msec): 00:24:10.584 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:24:10.584 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:24:10.584 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12], 00:24:10.584 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:24:10.584 | 99.99th=[ 171] 00:24:10.584 bw ( KiB/s): min=18400, max=28816, per=99.84%, avg=26062.00, stdev=5112.07, samples=4 00:24:10.584 iops : min= 4600, max= 7204, avg=6515.50, stdev=1278.02, samples=4 00:24:10.584 write: IOPS=6537, BW=25.5MiB/s (26.8MB/s)(51.3MiB/2007msec); 0 zone resets 00:24:10.584 slat (usec): min=2, max=141, avg= 2.66, stdev= 1.78 00:24:10.584 clat (usec): min=398, max=168951, avg=8652.69, stdev=10515.60 00:24:10.584 lat (usec): min=401, max=168959, avg=8655.35, stdev=10515.98 00:24:10.584 clat percentiles (msec): 00:24:10.584 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:24:10.584 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:24:10.584 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:24:10.584 | 99.00th=[ 10], 99.50th=[ 14], 99.90th=[ 169], 99.95th=[ 169], 00:24:10.584 | 99.99th=[ 169] 00:24:10.584 bw ( KiB/s): min=19384, max=28608, per=99.89%, avg=26122.00, stdev=4495.25, samples=4 00:24:10.584 iops : min= 4846, max= 7152, avg=6530.50, stdev=1123.81, samples=4 00:24:10.584 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:24:10.584 lat (msec) : 2=0.03%, 4=0.19%, 10=73.97%, 20=25.31%, 250=0.49% 00:24:10.584 cpu : usr=53.64%, sys=40.58%, ctx=77, majf=0, minf=6 00:24:10.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:10.584 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:10.584 issued rwts: total=13098,13121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:10.584 00:24:10.584 Run status group 0 (all jobs): 00:24:10.584 READ: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=51.2MiB (53.6MB), run=2007-2007msec 00:24:10.584 WRITE: bw=25.5MiB/s (26.8MB/s), 25.5MiB/s-25.5MiB/s (26.8MB/s-26.8MB/s), io=51.3MiB (53.7MB), run=2007-2007msec 00:24:10.584 07:04:17 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:10.584 07:04:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.584 07:04:17 -- common/autotest_common.sh@10 -- # set +x 00:24:10.584 07:04:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:10.584 07:04:17 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:10.584 07:04:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:10.584 07:04:17 -- common/autotest_common.sh@10 -- # set +x 00:24:11.516 07:04:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.516 07:04:18 -- host/fio.sh@62 -- # ls_nested_guid=dafdc60c-795c-4b6f-afc0-c34d9e226044 00:24:11.516 07:04:18 -- host/fio.sh@63 -- # get_lvs_free_mb dafdc60c-795c-4b6f-afc0-c34d9e226044 00:24:11.516 07:04:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=dafdc60c-795c-4b6f-afc0-c34d9e226044 00:24:11.516 07:04:18 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:11.516 07:04:18 -- common/autotest_common.sh@1345 -- # local fc 00:24:11.516 07:04:18 -- common/autotest_common.sh@1346 -- # local cs 00:24:11.516 07:04:18 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:11.516 07:04:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.516 07:04:18 -- common/autotest_common.sh@10 
-- # set +x 00:24:11.516 07:04:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.516 07:04:18 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:11.516 { 00:24:11.516 "uuid": "d966a7fa-83a6-43ff-9f6a-73e201aa365f", 00:24:11.516 "name": "lvs_0", 00:24:11.516 "base_bdev": "Nvme0n1", 00:24:11.516 "total_data_clusters": 930, 00:24:11.516 "free_clusters": 0, 00:24:11.516 "block_size": 512, 00:24:11.516 "cluster_size": 1073741824 00:24:11.516 }, 00:24:11.516 { 00:24:11.516 "uuid": "dafdc60c-795c-4b6f-afc0-c34d9e226044", 00:24:11.516 "name": "lvs_n_0", 00:24:11.516 "base_bdev": "390e6382-1de5-4081-b822-09febbf50158", 00:24:11.516 "total_data_clusters": 237847, 00:24:11.516 "free_clusters": 237847, 00:24:11.516 "block_size": 512, 00:24:11.516 "cluster_size": 4194304 00:24:11.516 } 00:24:11.516 ]' 00:24:11.516 07:04:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="dafdc60c-795c-4b6f-afc0-c34d9e226044") .free_clusters' 00:24:11.516 07:04:18 -- common/autotest_common.sh@1348 -- # fc=237847 00:24:11.516 07:04:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="dafdc60c-795c-4b6f-afc0-c34d9e226044") .cluster_size' 00:24:11.516 07:04:18 -- common/autotest_common.sh@1349 -- # cs=4194304 00:24:11.516 07:04:18 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:24:11.516 07:04:18 -- common/autotest_common.sh@1353 -- # echo 951388 00:24:11.516 951388 00:24:11.516 07:04:18 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:24:11.516 07:04:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.516 07:04:18 -- common/autotest_common.sh@10 -- # set +x 00:24:11.775 7d6923da-5cb0-4f1f-8593-7c467eb4124a 00:24:11.775 07:04:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.775 07:04:18 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:11.775 07:04:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.775 07:04:18 
-- common/autotest_common.sh@10 -- # set +x 00:24:11.775 07:04:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.775 07:04:18 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:11.775 07:04:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.776 07:04:18 -- common/autotest_common.sh@10 -- # set +x 00:24:11.776 07:04:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.776 07:04:18 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:11.776 07:04:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.776 07:04:18 -- common/autotest_common.sh@10 -- # set +x 00:24:11.776 07:04:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.776 07:04:18 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.776 07:04:18 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.776 07:04:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:11.776 07:04:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.776 07:04:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:11.776 07:04:18 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.776 07:04:18 -- common/autotest_common.sh@1320 -- # shift 00:24:11.776 07:04:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:11.776 07:04:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.034 07:04:18 -- 
common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.034 07:04:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:12.034 07:04:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:12.034 07:04:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:12.034 07:04:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:12.034 07:04:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.034 07:04:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.034 07:04:18 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:12.034 07:04:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:12.034 07:04:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:12.034 07:04:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:12.034 07:04:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:12.034 07:04:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:12.034 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:12.034 fio-3.35 00:24:12.034 Starting 1 thread 00:24:12.034 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.608 00:24:14.608 test: (groupid=0, jobs=1): err= 0: pid=3119728: Sun May 12 07:04:21 2024 00:24:14.608 read: IOPS=6182, BW=24.2MiB/s (25.3MB/s)(48.5MiB/2007msec) 00:24:14.608 slat (nsec): min=1997, max=147569, avg=2608.50, stdev=2126.37 00:24:14.608 clat (usec): min=4298, max=18849, avg=11493.07, stdev=974.53 00:24:14.608 lat (usec): min=4302, max=18851, avg=11495.67, stdev=974.46 00:24:14.608 clat percentiles (usec): 
00:24:14.608 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:24:14.608 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:24:14.608 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:24:14.608 | 99.00th=[13698], 99.50th=[14091], 99.90th=[17433], 99.95th=[17695], 00:24:14.608 | 99.99th=[18744] 00:24:14.608 bw ( KiB/s): min=23384, max=25256, per=99.74%, avg=24666.00, stdev=867.56, samples=4 00:24:14.608 iops : min= 5846, max= 6314, avg=6166.50, stdev=216.89, samples=4 00:24:14.608 write: IOPS=6165, BW=24.1MiB/s (25.3MB/s)(48.3MiB/2007msec); 0 zone resets 00:24:14.608 slat (usec): min=2, max=106, avg= 2.69, stdev= 1.74 00:24:14.608 clat (usec): min=2118, max=16518, avg=9127.58, stdev=850.73 00:24:14.608 lat (usec): min=2124, max=16520, avg=9130.27, stdev=850.71 00:24:14.608 clat percentiles (usec): 00:24:14.608 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:24:14.608 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:24:14.608 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:24:14.608 | 99.00th=[10945], 99.50th=[11469], 99.90th=[13829], 99.95th=[15401], 00:24:14.608 | 99.99th=[16450] 00:24:14.608 bw ( KiB/s): min=24536, max=24704, per=99.94%, avg=24646.00, stdev=79.30, samples=4 00:24:14.608 iops : min= 6134, max= 6176, avg=6161.50, stdev=19.82, samples=4 00:24:14.608 lat (msec) : 4=0.04%, 10=45.79%, 20=54.16% 00:24:14.608 cpu : usr=52.94%, sys=41.33%, ctx=76, majf=0, minf=6 00:24:14.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:14.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:14.608 issued rwts: total=12409,12374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:14.608 00:24:14.608 Run status group 0 (all jobs): 
00:24:14.608 READ: bw=24.2MiB/s (25.3MB/s), 24.2MiB/s-24.2MiB/s (25.3MB/s-25.3MB/s), io=48.5MiB (50.8MB), run=2007-2007msec 00:24:14.608 WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=48.3MiB (50.7MB), run=2007-2007msec 00:24:14.608 07:04:21 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:14.608 07:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:14.608 07:04:21 -- common/autotest_common.sh@10 -- # set +x 00:24:14.608 07:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:14.608 07:04:21 -- host/fio.sh@72 -- # sync 00:24:14.608 07:04:21 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:14.608 07:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:14.608 07:04:21 -- common/autotest_common.sh@10 -- # set +x 00:24:18.795 07:04:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.795 07:04:25 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:24:18.795 07:04:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.795 07:04:25 -- common/autotest_common.sh@10 -- # set +x 00:24:18.795 07:04:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.795 07:04:25 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:24:18.795 07:04:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.795 07:04:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.330 07:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.330 07:04:27 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:24:21.330 07:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.330 07:04:27 -- common/autotest_common.sh@10 -- # set +x 00:24:21.330 07:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.330 07:04:27 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:24:21.330 07:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.330 
07:04:27 -- common/autotest_common.sh@10 -- # set +x 00:24:22.707 07:04:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.707 07:04:29 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:22.707 07:04:29 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:22.707 07:04:29 -- host/fio.sh@84 -- # nvmftestfini 00:24:22.707 07:04:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:22.707 07:04:29 -- nvmf/common.sh@116 -- # sync 00:24:22.707 07:04:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:22.707 07:04:29 -- nvmf/common.sh@119 -- # set +e 00:24:22.707 07:04:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:22.707 07:04:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:22.707 rmmod nvme_tcp 00:24:22.707 rmmod nvme_fabrics 00:24:22.707 rmmod nvme_keyring 00:24:22.707 07:04:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:22.707 07:04:29 -- nvmf/common.sh@123 -- # set -e 00:24:22.707 07:04:29 -- nvmf/common.sh@124 -- # return 0 00:24:22.707 07:04:29 -- nvmf/common.sh@477 -- # '[' -n 3117359 ']' 00:24:22.707 07:04:29 -- nvmf/common.sh@478 -- # killprocess 3117359 00:24:22.707 07:04:29 -- common/autotest_common.sh@926 -- # '[' -z 3117359 ']' 00:24:22.707 07:04:29 -- common/autotest_common.sh@930 -- # kill -0 3117359 00:24:22.707 07:04:29 -- common/autotest_common.sh@931 -- # uname 00:24:22.707 07:04:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:22.707 07:04:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3117359 00:24:22.707 07:04:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:22.707 07:04:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:22.707 07:04:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3117359' 00:24:22.707 killing process with pid 3117359 00:24:22.708 07:04:29 -- common/autotest_common.sh@945 -- # kill 3117359 00:24:22.708 07:04:29 -- common/autotest_common.sh@950 -- # wait 3117359 
00:24:22.966 07:04:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:22.966 07:04:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:22.966 07:04:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:22.966 07:04:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.966 07:04:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:22.966 07:04:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.966 07:04:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.966 07:04:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.873 07:04:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:24.873 00:24:24.873 real 0m31.348s 00:24:24.873 user 1m53.623s 00:24:24.873 sys 0m6.146s 00:24:24.873 07:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:24.873 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:24:24.873 ************************************ 00:24:24.873 END TEST nvmf_fio_host 00:24:24.873 ************************************ 00:24:24.873 07:04:31 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:24.873 07:04:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:24.873 07:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:24.873 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:24:24.873 ************************************ 00:24:24.873 START TEST nvmf_failover 00:24:24.873 ************************************ 00:24:24.873 07:04:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:25.133 * Looking for test storage... 
00:24:25.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.133 07:04:32 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.133 07:04:32 -- nvmf/common.sh@7 -- # uname -s 00:24:25.133 07:04:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.133 07:04:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.133 07:04:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.133 07:04:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.133 07:04:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.133 07:04:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.133 07:04:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.133 07:04:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.133 07:04:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.133 07:04:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.133 07:04:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:25.133 07:04:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:25.133 07:04:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.133 07:04:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.133 07:04:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.133 07:04:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.133 07:04:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.133 07:04:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.133 07:04:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.133 07:04:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.133 07:04:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.133 07:04:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.133 07:04:32 -- paths/export.sh@5 -- # export PATH 00:24:25.133 07:04:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.133 07:04:32 -- nvmf/common.sh@46 -- # : 0 00:24:25.133 07:04:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:25.133 07:04:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:25.133 07:04:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:25.133 07:04:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.133 07:04:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.133 07:04:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:25.133 07:04:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:25.133 07:04:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:25.133 07:04:32 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:25.133 07:04:32 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:25.133 07:04:32 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:25.133 07:04:32 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:25.133 07:04:32 -- host/failover.sh@18 -- # nvmftestinit 00:24:25.133 07:04:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:25.133 07:04:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.133 07:04:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:25.133 07:04:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:25.133 07:04:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:25.133 07:04:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:24:25.133 07:04:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.133 07:04:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.133 07:04:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:25.133 07:04:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:25.133 07:04:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:25.133 07:04:32 -- common/autotest_common.sh@10 -- # set +x 00:24:27.036 07:04:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:27.036 07:04:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:27.036 07:04:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:27.036 07:04:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:27.036 07:04:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:27.036 07:04:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:27.036 07:04:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:27.036 07:04:33 -- nvmf/common.sh@294 -- # net_devs=() 00:24:27.036 07:04:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:27.036 07:04:33 -- nvmf/common.sh@295 -- # e810=() 00:24:27.036 07:04:33 -- nvmf/common.sh@295 -- # local -ga e810 00:24:27.036 07:04:33 -- nvmf/common.sh@296 -- # x722=() 00:24:27.036 07:04:33 -- nvmf/common.sh@296 -- # local -ga x722 00:24:27.036 07:04:33 -- nvmf/common.sh@297 -- # mlx=() 00:24:27.036 07:04:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:27.036 07:04:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:24:27.036 07:04:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.036 07:04:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:27.036 07:04:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:27.036 07:04:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:27.036 07:04:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:27.036 07:04:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:27.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:27.036 07:04:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:27.036 07:04:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:27.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:27.036 07:04:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.036 07:04:33 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:27.036 07:04:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:27.036 07:04:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.036 07:04:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:27.036 07:04:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.036 07:04:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:27.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:27.036 07:04:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.036 07:04:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:27.036 07:04:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.036 07:04:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:27.036 07:04:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.036 07:04:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:27.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:27.036 07:04:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.036 07:04:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:27.036 07:04:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:27.036 07:04:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:27.036 07:04:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:27.036 07:04:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.036 07:04:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.036 07:04:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.036 07:04:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:24:27.036 07:04:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.036 07:04:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.036 07:04:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:27.036 07:04:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.037 07:04:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.037 07:04:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:27.037 07:04:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:27.037 07:04:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.037 07:04:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.037 07:04:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.037 07:04:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.037 07:04:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:27.037 07:04:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.037 07:04:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.037 07:04:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.037 07:04:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:27.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:24:27.037 00:24:27.037 --- 10.0.0.2 ping statistics --- 00:24:27.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.037 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:27.037 07:04:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:27.037 00:24:27.037 --- 10.0.0.1 ping statistics --- 00:24:27.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.037 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:27.037 07:04:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.037 07:04:34 -- nvmf/common.sh@410 -- # return 0 00:24:27.037 07:04:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:27.037 07:04:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.037 07:04:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:27.037 07:04:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:27.037 07:04:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.037 07:04:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:27.037 07:04:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:27.037 07:04:34 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:27.037 07:04:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:27.037 07:04:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:27.037 07:04:34 -- common/autotest_common.sh@10 -- # set +x 00:24:27.037 07:04:34 -- nvmf/common.sh@469 -- # nvmfpid=3122883 00:24:27.037 07:04:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:27.037 07:04:34 -- nvmf/common.sh@470 -- # waitforlisten 3122883 00:24:27.037 07:04:34 -- common/autotest_common.sh@819 -- # '[' -z 3122883 ']' 00:24:27.037 07:04:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.037 07:04:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:27.037 07:04:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:27.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.037 07:04:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:27.037 07:04:34 -- common/autotest_common.sh@10 -- # set +x 00:24:27.037 [2024-05-12 07:04:34.115086] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:27.037 [2024-05-12 07:04:34.115159] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.037 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.295 [2024-05-12 07:04:34.179296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:27.295 [2024-05-12 07:04:34.287658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:27.295 [2024-05-12 07:04:34.287809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.295 [2024-05-12 07:04:34.287827] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.295 [2024-05-12 07:04:34.287840] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
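For reference, the nvmf_tcp_init sequence recorded above (nvmf/common.sh@228-267) amounts to the following standalone sketch. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from this run and will differ per machine; the wrapper name setup_tcp_netns is illustrative, not an SPDK helper.

```shell
# Sketch of the loopback topology built by nvmf_tcp_init above.
# Requires root and two NIC ports wired back-to-back.
# setup_tcp_netns is an illustrative name, not part of nvmf/common.sh.
setup_tcp_netns() {
    local target_if=$1 initiator_if=$2
    local ns="${target_if}_ns_spdk"

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # Move the target port into its own namespace so the kernel
    # initiator and the SPDK target see separate network stacks.
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Allow inbound NVMe/TCP traffic on the default port.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

# Usage (root): setup_tcp_netns cvl_0_0 cvl_0_1
```

The bidirectional pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) verify this topology before the target starts.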
00:24:27.295 [2024-05-12 07:04:34.287925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.295 [2024-05-12 07:04:34.287988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.295 [2024-05-12 07:04:34.287991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.231 07:04:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:28.231 07:04:35 -- common/autotest_common.sh@852 -- # return 0 00:24:28.231 07:04:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:28.231 07:04:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:28.231 07:04:35 -- common/autotest_common.sh@10 -- # set +x 00:24:28.231 07:04:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.231 07:04:35 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:28.231 [2024-05-12 07:04:35.307607] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.231 07:04:35 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:28.489 Malloc0 00:24:28.489 07:04:35 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.746 07:04:35 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.003 07:04:36 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.261 [2024-05-12 07:04:36.287148] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.261 07:04:36 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:29.519 [2024-05-12 07:04:36.523865] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:29.519 07:04:36 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:29.777 [2024-05-12 07:04:36.776721] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:29.777 07:04:36 -- host/failover.sh@31 -- # bdevperf_pid=3123309 00:24:29.777 07:04:36 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:29.777 07:04:36 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:29.777 07:04:36 -- host/failover.sh@34 -- # waitforlisten 3123309 /var/tmp/bdevperf.sock 00:24:29.777 07:04:36 -- common/autotest_common.sh@819 -- # '[' -z 3123309 ']' 00:24:29.777 07:04:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.777 07:04:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:29.777 07:04:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
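The rpc.py calls logged above (host/failover.sh@22-28) provision the target in four steps: create the TCP transport, back it with a 64 MiB malloc bdev, expose that bdev through one subsystem, and open three listeners for the failover test to churn through. A condensed sketch, using the same RPC names and arguments as this run (the wrapper name provision_failover_target is illustrative):

```shell
# Sketch of the target-side provisioning performed above: one TCP
# transport, one 64 MiB malloc bdev (512 B blocks), one subsystem,
# three listeners. provision_failover_target is an illustrative name.
provision_failover_target() {
    local rpc_py=$1 nqn="nqn.2016-06.io.spdk:cnode1"

    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" bdev_malloc_create 64 512 -b Malloc0
    "$rpc_py" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    "$rpc_py" nvmf_subsystem_add_ns "$nqn" Malloc0

    # Three listeners on the same IP give the initiator alternate
    # paths to fail over between (4420 -> 4421 -> 4422).
    for port in 4420 4421 4422; do
        "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
    done
}
```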
00:24:29.777 07:04:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:29.777 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:24:30.710 07:04:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:30.710 07:04:37 -- common/autotest_common.sh@852 -- # return 0 00:24:30.710 07:04:37 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:31.285 NVMe0n1 00:24:31.285 07:04:38 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:31.573 00 00:24:31.573 07:04:38 -- host/failover.sh@39 -- # run_test_pid=3123462 00:24:31.573 07:04:38 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.573 07:04:38 -- host/failover.sh@41 -- # sleep 1 00:24:32.517 07:04:39 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.777 [2024-05-12 07:04:39.837510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d921d0 is same with the state(5) to be set 00:24:32.777 07:04:39 -- host/failover.sh@45 -- # sleep 3 00:24:36.061 07:04:42 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0
-t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:36.319 00 00:24:36.319 07:04:43 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.577 [2024-05-12 07:04:43.583508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93720 is same with the state(5) to be set 00:24:36.578 07:04:43 -- host/failover.sh@50 -- # sleep 3 00:24:39.865 07:04:46 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.865 [2024-05-12 07:04:46.846204] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.865 07:04:46 -- host/failover.sh@55 -- # sleep 1 00:24:40.795 07:04:47 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:41.055 [2024-05-12 07:04:48.096831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93e00 is same with the state(5) to be set
00:24:41.055 07:04:48 -- host/failover.sh@59 -- # wait 3123462 00:24:47.627 0 00:24:47.627 07:04:53 -- host/failover.sh@61 -- # killprocess 3123309 00:24:47.627 07:04:53 -- common/autotest_common.sh@926 -- # '[' -z 3123309 ']' 00:24:47.627 07:04:53 -- common/autotest_common.sh@930 -- # kill -0 3123309 00:24:47.627 07:04:53 -- common/autotest_common.sh@931 -- # uname 00:24:47.627 07:04:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:47.627 07:04:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3123309 00:24:47.627 07:04:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:47.627 07:04:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:47.627 07:04:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3123309' 00:24:47.627 killing process with pid 3123309 00:24:47.627 07:04:53 -- common/autotest_common.sh@945 -- # kill 3123309 00:24:47.628 07:04:53 -- common/autotest_common.sh@950 -- # wait 3123309 00:24:47.628 07:04:54 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.628 [2024-05-12 07:04:36.829937] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:24:47.628 [2024-05-12 07:04:36.830047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3123309 ] 00:24:47.628 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.628 [2024-05-12 07:04:36.890064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.628 [2024-05-12 07:04:36.997072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.628 Running I/O for 15 seconds... 00:24:47.628 [2024-05-12 07:04:39.838436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.628 [2024-05-12 07:04:39.838598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.838976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.838991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.628 [2024-05-12 07:04:39.839165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.628 [2024-05-12 07:04:39.839439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 
nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.628 [2024-05-12 07:04:39.839496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.628 [2024-05-12 07:04:39.839553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.628 [2024-05-12 07:04:39.839568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.628 [2024-05-12 07:04:39.839582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.839617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.839648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.629 [2024-05-12 07:04:39.839663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.839676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.839731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.839765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.839797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.839828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.839861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.839891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.839920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.839949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.839978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.839994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 
[2024-05-12 07:04:39.840219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.629 [2024-05-12 07:04:39.840664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 
[2024-05-12 07:04:39.840732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.629 [2024-05-12 07:04:39.840839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.629 [2024-05-12 07:04:39.840855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.840869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.840884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.840898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.840913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.840927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.840943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.840956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.840972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.840986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 
[2024-05-12 07:04:39.841248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.630 [2024-05-12 07:04:39.841763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.630 [2024-05-12 07:04:39.841865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.841972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.841986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.842001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.842030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.842045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.842059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.842073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.630 [2024-05-12 07:04:39.842086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.630 [2024-05-12 07:04:39.842102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:39.842115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.631 [2024-05-12 07:04:39.842143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:39.842171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:39.842200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:39.842228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:39.842257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.631 [2024-05-12 07:04:39.842272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:39.842286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:39.842318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:39.842348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:39.842376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9b690 is same with the state(5) to be set 00:24:47.631 [2024-05-12 07:04:39.842408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.631 [2024-05-12 07:04:39.842420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.631 [2024-05-12 07:04:39.842431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118192 len:8 PRP1 0x0 PRP2 0x0 
00:24:47.631 [2024-05-12 07:04:39.842444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842510] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d9b690 was disconnected and freed. reset controller. 00:24:47.631 [2024-05-12 07:04:39.842536] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:47.631 [2024-05-12 07:04:39.842570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.631 [2024-05-12 07:04:39.842603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.631 [2024-05-12 07:04:39.842632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.631 [2024-05-12 07:04:39.842659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.631 [2024-05-12 07:04:39.842686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:39.842710] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
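The aborted-command records above follow the fixed format printed by `nvme_io_qpair_print_command` (opcode, `sqid`, `cid`, `nsid`, `lba`, `len`). When triaging a burst like this, it can help to tally the aborted I/O by opcode rather than read each record. The following parser is an illustrative sketch written for this log format, not part of SPDK itself:

```python
import re
from collections import Counter

# Matches the command records emitted by nvme_io_qpair_print_command:
# opcode, submission queue id, command id, namespace id, LBA, length.
CMD_RE = re.compile(
    r"\*NOTICE\*: (?P<op>READ|WRITE) sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) "
    r"nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def parse_command(line):
    """Return the parsed fields of one aborted-command record, or None."""
    m = CMD_RE.search(line)
    if not m:
        return None
    fields = m.groupdict()
    # Keep the opcode as a string; everything else is numeric.
    return {k: (v if k == "op" else int(v)) for k, v in fields.items()}

def tally_aborts(lines):
    """Count aborted READ/WRITE commands in a chunk of log output."""
    counts = Counter()
    for line in lines:
        rec = parse_command(line)
        if rec:
            counts[rec["op"]] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "[2024-05-12 07:04:39.841763] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
        "*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118616 len:8 "
        "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0",
        "[2024-05-12 07:04:39.841792] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
        "*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118624 len:8 "
        "SGL DATA BLOCK OFFSET 0x0 len:0x1000",
    ]
    print(tally_aborts(sample))
```

These aborts are expected here: the qpair was disconnected and freed for a failover from 10.0.0.2:4420 to 10.0.0.2:4421, so every queued command completes manually with ABORTED - SQ DELETION before the controller is reset on the new path.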
00:24:47.631 [2024-05-12 07:04:39.842751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7cbd0 (9): Bad file descriptor 00:24:47.631 [2024-05-12 07:04:39.845052] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.631 [2024-05-12 07:04:39.881270] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:47.631 [2024-05-12 07:04:43.584423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:440 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584822] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.584984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.584999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.585013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.585044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.585058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.585072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.585086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.585100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.585114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.585128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.631 [2024-05-12 07:04:43.585141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.585156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.631 
[2024-05-12 07:04:43.585169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.631 [2024-05-12 07:04:43.585183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.632 [2024-05-12 07:04:43.585498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:736 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.585903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.585960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.585975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.586002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.586017] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.586030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.586045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.586061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.586076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.586090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.586104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.586118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.586132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.632 [2024-05-12 07:04:43.586146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.632 [2024-05-12 07:04:43.586160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.632 [2024-05-12 07:04:43.586173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.633 [2024-05-12 07:04:43.586228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 
07:04:43.586337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.633 [2024-05-12 07:04:43.586480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.633 [2024-05-12 07:04:43.586508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.633 [2024-05-12 07:04:43.586591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.633 [2024-05-12 07:04:43.586645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 
[2024-05-12 07:04:43.586660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.633 [2024-05-12 07:04:43.586726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586845] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.586976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.586989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:47.633 [2024-05-12 07:04:43.587032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.587060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.633 [2024-05-12 07:04:43.587087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.633 [2024-05-12 07:04:43.587115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.587142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.633 [2024-05-12 07:04:43.587170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.587201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.587229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.587256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.587284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.587311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.587339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.633 [2024-05-12 07:04:43.587353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.633 [2024-05-12 07:04:43.587368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.587454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.587482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 
07:04:43.587510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.587541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.587598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.587654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:98 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.587781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.587811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.634 [2024-05-12 07:04:43.587886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.587900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.587963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.587978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.588006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.588036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.588081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.634 [2024-05-12 07:04:43.588110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.588138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.588166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.588193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.588221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:560 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.588249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.588291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.634 [2024-05-12 07:04:43.588320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d89050 is same with the state(5) to be set 00:24:47.634 [2024-05-12 07:04:43.588351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.634 [2024-05-12 07:04:43.588366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.634 [2024-05-12 07:04:43.588379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:592 len:8 PRP1 0x0 PRP2 0x0 00:24:47.634 [2024-05-12 07:04:43.588392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.634 [2024-05-12 07:04:43.588453] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d89050 was disconnected and freed. reset controller. 
00:24:47.634 [2024-05-12 07:04:43.588471] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:47.634 [2024-05-12 07:04:43.588517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.634 [2024-05-12 07:04:43.588536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.634 [2024-05-12 07:04:43.588552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.634 [2024-05-12 07:04:43.588566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.634 [2024-05-12 07:04:43.588580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.634 [2024-05-12 07:04:43.588594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.634 [2024-05-12 07:04:43.588608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.634 [2024-05-12 07:04:43.588621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.634 [2024-05-12 07:04:43.588634] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:47.634 [2024-05-12 07:04:43.588673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7cbd0 (9): Bad file descriptor
00:24:47.634 [2024-05-12 07:04:43.590952] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:47.634 [2024-05-12 07:04:43.622990] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:47.634 [2024-05-12 07:04:48.097893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.634 [2024-05-12 07:04:48.097937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.634 [2024-05-12 07:04:48.097967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.634 [2024-05-12 07:04:48.097985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.634 [2024-05-12 07:04:48.098002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.098536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.098564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.098591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.098673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.098722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.098753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.098781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.098810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.098973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.098987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.099002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.099015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.099045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.099058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.099073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.099086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.099101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.099113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.099128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.099141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.099155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.099168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.099183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.099195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.099209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.635 [2024-05-12 07:04:48.099226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.635 [2024-05-12 07:04:48.099241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.635 [2024-05-12 07:04:48.099254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.636 [2024-05-12 07:04:48.099392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.636 [2024-05-12 07:04:48.099419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.636 [2024-05-12 07:04:48.099869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.636 [2024-05-12 07:04:48.099898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.636 [2024-05-12 07:04:48.099955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.099973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.099987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.100030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.100059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.100086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.100113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.100140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.636 [2024-05-12 07:04:48.100168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.636 [2024-05-12 07:04:48.100196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.636 [2024-05-12 07:04:48.100222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.636 [2024-05-12 07:04:48.100249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.636 [2024-05-12 07:04:48.100276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.636 [2024-05-12 07:04:48.100290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.637 [2024-05-12 07:04:48.100303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.637 [2024-05-12 07:04:48.100389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.637 [2024-05-12 07:04:48.100636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.637 [2024-05-12 07:04:48.100662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.637 [2024-05-12 07:04:48.100816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.100958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.100973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.637 [2024-05-12 07:04:48.100986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.101001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.101028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.101044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.101056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.101070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.637 [2024-05-12 07:04:48.101084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.101098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.637 [2024-05-12 07:04:48.101111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.101125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.637 [2024-05-12 07:04:48.101137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.637 [2024-05-12 07:04:48.101156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.637 [2024-05-12 07:04:48.101170] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.637 [2024-05-12 07:04:48.101197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.637 [2024-05-12 07:04:48.101225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.637 [2024-05-12 07:04:48.101252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.637 [2024-05-12 07:04:48.101284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.637 [2024-05-12 07:04:48.101311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 
nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.637 [2024-05-12 07:04:48.101338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.637 [2024-05-12 07:04:48.101365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.637 [2024-05-12 07:04:48.101392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.637 [2024-05-12 07:04:48.101419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.637 [2024-05-12 07:04:48.101446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.637 [2024-05-12 07:04:48.101473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 
[2024-05-12 07:04:48.101488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.637 [2024-05-12 07:04:48.101504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.637 [2024-05-12 07:04:48.101532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.637 [2024-05-12 07:04:48.101546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.638 [2024-05-12 07:04:48.101559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.101574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.638 [2024-05-12 07:04:48.101587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.101601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.638 [2024-05-12 07:04:48.101614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.101628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.638 [2024-05-12 07:04:48.101641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.101655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.638 [2024-05-12 07:04:48.101668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.101706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.638 [2024-05-12 07:04:48.101722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.101743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.638 [2024-05-12 07:04:48.101757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.101772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9d530 is same with the state(5) to be set 00:24:47.638 [2024-05-12 07:04:48.101789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.638 [2024-05-12 07:04:48.101802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.638 [2024-05-12 07:04:48.101813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95000 len:8 PRP1 0x0 PRP2 0x0 00:24:47.638 [2024-05-12 07:04:48.101825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.101890] 
bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d9d530 was disconnected and freed. reset controller. 00:24:47.638 [2024-05-12 07:04:48.101909] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:47.638 [2024-05-12 07:04:48.101955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.638 [2024-05-12 07:04:48.101975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.101994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.638 [2024-05-12 07:04:48.102008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.102022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.638 [2024-05-12 07:04:48.102036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.102050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.638 [2024-05-12 07:04:48.102063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.638 [2024-05-12 07:04:48.102078] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:47.638 [2024-05-12 07:04:48.102118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7cbd0 (9): Bad file descriptor 00:24:47.638 [2024-05-12 07:04:48.104290] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.638 [2024-05-12 07:04:48.136824] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:47.638 00:24:47.638 Latency(us) 00:24:47.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.638 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:47.638 Verification LBA range: start 0x0 length 0x4000 00:24:47.638 NVMe0n1 : 15.01 12445.69 48.62 380.41 0.00 9962.74 861.68 14757.74 00:24:47.638 =================================================================================================================== 00:24:47.638 Total : 12445.69 48.62 380.41 0.00 9962.74 861.68 14757.74 00:24:47.638 Received shutdown signal, test time was about 15.000000 seconds 00:24:47.638 00:24:47.638 Latency(us) 00:24:47.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.638 =================================================================================================================== 00:24:47.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.638 07:04:54 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:47.638 07:04:54 -- host/failover.sh@65 -- # count=3 00:24:47.638 07:04:54 -- host/failover.sh@67 -- # (( count != 3 )) 00:24:47.638 07:04:54 -- host/failover.sh@73 -- # bdevperf_pid=3125355 00:24:47.638 07:04:54 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:47.638 07:04:54 -- host/failover.sh@75 -- # waitforlisten 3125355 /var/tmp/bdevperf.sock 00:24:47.638 07:04:54 -- common/autotest_common.sh@819 
-- # '[' -z 3125355 ']' 00:24:47.638 07:04:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.638 07:04:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:47.638 07:04:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.638 07:04:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:47.638 07:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:48.204 07:04:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:48.204 07:04:55 -- common/autotest_common.sh@852 -- # return 0 00:24:48.204 07:04:55 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:48.204 [2024-05-12 07:04:55.329864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:48.462 07:04:55 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:48.721 [2024-05-12 07:04:55.610653] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:48.721 07:04:55 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:48.979 NVMe0n1 00:24:48.979 07:04:55 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:49.236 00:24:49.236 07:04:56 -- host/failover.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:49.494 00:24:49.494 07:04:56 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:49.494 07:04:56 -- host/failover.sh@82 -- # grep -q NVMe0 00:24:49.752 07:04:56 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.012 07:04:57 -- host/failover.sh@87 -- # sleep 3 00:24:53.334 07:05:00 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:53.334 07:05:00 -- host/failover.sh@88 -- # grep -q NVMe0 00:24:53.334 07:05:00 -- host/failover.sh@90 -- # run_test_pid=3126172 00:24:53.334 07:05:00 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:53.334 07:05:00 -- host/failover.sh@92 -- # wait 3126172 00:24:54.708 0 00:24:54.708 07:05:01 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:54.708 [2024-05-12 07:04:54.087556] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:24:54.708 [2024-05-12 07:04:54.087658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125355 ] 00:24:54.708 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.708 [2024-05-12 07:04:54.149766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.708 [2024-05-12 07:04:54.258328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.708 [2024-05-12 07:04:57.043216] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:54.708 [2024-05-12 07:04:57.043298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.708 [2024-05-12 07:04:57.043333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.708 [2024-05-12 07:04:57.043350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.708 [2024-05-12 07:04:57.043363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.708 [2024-05-12 07:04:57.043392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.708 [2024-05-12 07:04:57.043411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.708 [2024-05-12 07:04:57.043426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.708 [2024-05-12 07:04:57.043440] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.708 [2024-05-12 07:04:57.043454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.708 [2024-05-12 07:04:57.043491] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.708 [2024-05-12 07:04:57.043523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b2bd0 (9): Bad file descriptor 00:24:54.708 [2024-05-12 07:04:57.050982] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:54.708 Running I/O for 1 seconds... 00:24:54.708 00:24:54.708 Latency(us) 00:24:54.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.708 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.708 Verification LBA range: start 0x0 length 0x4000 00:24:54.708 NVMe0n1 : 1.01 13027.75 50.89 0.00 0.00 9782.74 1243.97 17670.45 00:24:54.708 =================================================================================================================== 00:24:54.708 Total : 13027.75 50.89 0.00 0.00 9782.74 1243.97 17670.45 00:24:54.708 07:05:01 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.708 07:05:01 -- host/failover.sh@95 -- # grep -q NVMe0 00:24:54.708 07:05:01 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.965 07:05:01 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.965 07:05:01 -- host/failover.sh@99 -- # grep -q NVMe0 00:24:55.223 07:05:02 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.481 07:05:02 -- host/failover.sh@101 -- # sleep 3 00:24:58.765 07:05:05 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:58.765 07:05:05 -- host/failover.sh@103 -- # grep -q NVMe0 00:24:58.765 07:05:05 -- host/failover.sh@108 -- # killprocess 3125355 00:24:58.765 07:05:05 -- common/autotest_common.sh@926 -- # '[' -z 3125355 ']' 00:24:58.765 07:05:05 -- common/autotest_common.sh@930 -- # kill -0 3125355 00:24:58.765 07:05:05 -- common/autotest_common.sh@931 -- # uname 00:24:58.765 07:05:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:58.765 07:05:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3125355 00:24:58.765 07:05:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:58.765 07:05:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:58.765 07:05:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3125355' 00:24:58.765 killing process with pid 3125355 00:24:58.765 07:05:05 -- common/autotest_common.sh@945 -- # kill 3125355 00:24:58.765 07:05:05 -- common/autotest_common.sh@950 -- # wait 3125355 00:24:59.022 07:05:05 -- host/failover.sh@110 -- # sync 00:24:59.022 07:05:05 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.281 07:05:06 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:59.281 07:05:06 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:59.281 07:05:06 -- host/failover.sh@116 -- # nvmftestfini 00:24:59.281 07:05:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:59.281 07:05:06 -- 
nvmf/common.sh@116 -- # sync 00:24:59.281 07:05:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:59.281 07:05:06 -- nvmf/common.sh@119 -- # set +e 00:24:59.281 07:05:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:59.281 07:05:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:59.281 rmmod nvme_tcp 00:24:59.281 rmmod nvme_fabrics 00:24:59.281 rmmod nvme_keyring 00:24:59.281 07:05:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:59.281 07:05:06 -- nvmf/common.sh@123 -- # set -e 00:24:59.281 07:05:06 -- nvmf/common.sh@124 -- # return 0 00:24:59.281 07:05:06 -- nvmf/common.sh@477 -- # '[' -n 3122883 ']' 00:24:59.281 07:05:06 -- nvmf/common.sh@478 -- # killprocess 3122883 00:24:59.281 07:05:06 -- common/autotest_common.sh@926 -- # '[' -z 3122883 ']' 00:24:59.281 07:05:06 -- common/autotest_common.sh@930 -- # kill -0 3122883 00:24:59.281 07:05:06 -- common/autotest_common.sh@931 -- # uname 00:24:59.281 07:05:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:59.281 07:05:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3122883 00:24:59.281 07:05:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:59.281 07:05:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:59.281 07:05:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3122883' 00:24:59.281 killing process with pid 3122883 00:24:59.281 07:05:06 -- common/autotest_common.sh@945 -- # kill 3122883 00:24:59.281 07:05:06 -- common/autotest_common.sh@950 -- # wait 3122883 00:24:59.540 07:05:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:59.540 07:05:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:59.540 07:05:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:59.540 07:05:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.540 07:05:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:59.540 07:05:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:24:59.540 07:05:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.540 07:05:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.078 07:05:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:02.078 00:25:02.078 real 0m36.648s 00:25:02.078 user 2m10.096s 00:25:02.078 sys 0m5.841s 00:25:02.078 07:05:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.078 07:05:08 -- common/autotest_common.sh@10 -- # set +x 00:25:02.078 ************************************ 00:25:02.078 END TEST nvmf_failover 00:25:02.078 ************************************ 00:25:02.078 07:05:08 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:02.078 07:05:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:02.078 07:05:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.078 07:05:08 -- common/autotest_common.sh@10 -- # set +x 00:25:02.078 ************************************ 00:25:02.078 START TEST nvmf_discovery 00:25:02.078 ************************************ 00:25:02.078 07:05:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:02.078 * Looking for test storage... 
00:25:02.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.078 07:05:08 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.078 07:05:08 -- nvmf/common.sh@7 -- # uname -s 00:25:02.078 07:05:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.078 07:05:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.078 07:05:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.078 07:05:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.078 07:05:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.078 07:05:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.078 07:05:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.078 07:05:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.078 07:05:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.078 07:05:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.078 07:05:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.078 07:05:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.078 07:05:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.078 07:05:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.078 07:05:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.078 07:05:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.078 07:05:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.078 07:05:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.078 07:05:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.078 07:05:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.078 07:05:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.078 07:05:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.078 07:05:08 -- paths/export.sh@5 -- # export PATH 00:25:02.078 07:05:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.078 07:05:08 -- nvmf/common.sh@46 -- # : 0 00:25:02.078 07:05:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:02.078 07:05:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:02.078 07:05:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:02.078 07:05:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.078 07:05:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.078 07:05:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:02.078 07:05:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:02.078 07:05:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:02.078 07:05:08 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:02.078 07:05:08 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:02.078 07:05:08 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:02.078 07:05:08 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:02.078 07:05:08 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:02.078 07:05:08 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:02.078 07:05:08 -- host/discovery.sh@25 -- # nvmftestinit 00:25:02.078 07:05:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:02.078 07:05:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.078 07:05:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:02.078 07:05:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:02.078 
07:05:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:02.079 07:05:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.079 07:05:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.079 07:05:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.079 07:05:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:02.079 07:05:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:02.079 07:05:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:02.079 07:05:08 -- common/autotest_common.sh@10 -- # set +x 00:25:03.456 07:05:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:03.456 07:05:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:03.456 07:05:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:03.456 07:05:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:03.456 07:05:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:03.456 07:05:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:03.456 07:05:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:03.456 07:05:10 -- nvmf/common.sh@294 -- # net_devs=() 00:25:03.456 07:05:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:03.456 07:05:10 -- nvmf/common.sh@295 -- # e810=() 00:25:03.456 07:05:10 -- nvmf/common.sh@295 -- # local -ga e810 00:25:03.456 07:05:10 -- nvmf/common.sh@296 -- # x722=() 00:25:03.456 07:05:10 -- nvmf/common.sh@296 -- # local -ga x722 00:25:03.456 07:05:10 -- nvmf/common.sh@297 -- # mlx=() 00:25:03.456 07:05:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:03.456 07:05:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.456 07:05:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.715 07:05:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:03.715 07:05:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:03.715 07:05:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:03.715 07:05:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.715 07:05:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:03.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:03.715 07:05:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.715 07:05:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:03.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:03.715 07:05:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.715 07:05:10 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:03.715 07:05:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:03.715 07:05:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.715 07:05:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.715 07:05:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.715 07:05:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:03.715 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:03.715 07:05:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.715 07:05:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:03.715 07:05:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.715 07:05:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.715 07:05:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.715 07:05:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:03.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:03.715 07:05:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.715 07:05:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:03.715 07:05:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:03.715 07:05:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:03.715 07:05:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.715 07:05:10 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.715 07:05:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.715 07:05:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:03.715 07:05:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.715 07:05:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.715 07:05:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:03.715 07:05:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.715 07:05:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.715 07:05:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:03.715 07:05:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:03.715 07:05:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.715 07:05:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.715 07:05:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.715 07:05:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.715 07:05:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:03.715 07:05:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.715 07:05:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.715 07:05:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.715 07:05:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:03.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:03.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:25:03.715 00:25:03.715 --- 10.0.0.2 ping statistics --- 00:25:03.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.715 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:03.715 07:05:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:25:03.715 00:25:03.715 --- 10.0.0.1 ping statistics --- 00:25:03.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.715 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:25:03.715 07:05:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.715 07:05:10 -- nvmf/common.sh@410 -- # return 0 00:25:03.715 07:05:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:03.715 07:05:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.715 07:05:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:03.715 07:05:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.715 07:05:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:03.715 07:05:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:03.715 07:05:10 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:03.715 07:05:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:03.715 07:05:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:03.715 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:25:03.715 07:05:10 -- nvmf/common.sh@469 -- # nvmfpid=3128807 00:25:03.715 07:05:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:03.715 07:05:10 -- nvmf/common.sh@470 -- # waitforlisten 3128807 00:25:03.715 07:05:10 -- common/autotest_common.sh@819 
-- # '[' -z 3128807 ']' 00:25:03.715 07:05:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.715 07:05:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.715 07:05:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.715 07:05:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.715 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:25:03.715 [2024-05-12 07:05:10.789033] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:03.715 [2024-05-12 07:05:10.789123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.715 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.975 [2024-05-12 07:05:10.853449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.975 [2024-05-12 07:05:10.957284] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:03.975 [2024-05-12 07:05:10.957431] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.975 [2024-05-12 07:05:10.957446] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.975 [2024-05-12 07:05:10.957458] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:03.975 [2024-05-12 07:05:10.957490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.912 07:05:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:04.912 07:05:11 -- common/autotest_common.sh@852 -- # return 0 00:25:04.912 07:05:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:04.912 07:05:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:04.912 07:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:04.912 07:05:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.912 07:05:11 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.912 07:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.912 07:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:04.912 [2024-05-12 07:05:11.785322] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.912 07:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.912 07:05:11 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:04.912 07:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.912 07:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:04.912 [2024-05-12 07:05:11.793483] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:04.912 07:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.912 07:05:11 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:04.912 07:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.912 07:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:04.912 null0 00:25:04.912 07:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.912 07:05:11 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:04.912 07:05:11 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:25:04.912 07:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:04.912 null1 00:25:04.912 07:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.912 07:05:11 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:04.912 07:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.912 07:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:04.912 07:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.912 07:05:11 -- host/discovery.sh@45 -- # hostpid=3128958 00:25:04.912 07:05:11 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:04.912 07:05:11 -- host/discovery.sh@46 -- # waitforlisten 3128958 /tmp/host.sock 00:25:04.912 07:05:11 -- common/autotest_common.sh@819 -- # '[' -z 3128958 ']' 00:25:04.912 07:05:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:04.912 07:05:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:04.912 07:05:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:04.912 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:04.912 07:05:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:04.912 07:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:04.912 [2024-05-12 07:05:11.861958] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:04.912 [2024-05-12 07:05:11.862065] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128958 ] 00:25:04.912 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.912 [2024-05-12 07:05:11.923376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.912 [2024-05-12 07:05:12.036774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:04.912 [2024-05-12 07:05:12.036960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.845 07:05:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:05.845 07:05:12 -- common/autotest_common.sh@852 -- # return 0 00:25:05.845 07:05:12 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.845 07:05:12 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:05.845 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.845 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:05.845 07:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.845 07:05:12 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:05.845 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.845 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:05.845 07:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.845 07:05:12 -- host/discovery.sh@72 -- # notify_id=0 00:25:05.845 07:05:12 -- host/discovery.sh@78 -- # get_subsystem_names 00:25:05.845 07:05:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.845 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:25:05.845 07:05:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.845 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:05.845 07:05:12 -- host/discovery.sh@59 -- # sort 00:25:05.845 07:05:12 -- host/discovery.sh@59 -- # xargs 00:25:05.845 07:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.845 07:05:12 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:25:05.845 07:05:12 -- host/discovery.sh@79 -- # get_bdev_list 00:25:05.845 07:05:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.845 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.845 07:05:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.845 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:05.845 07:05:12 -- host/discovery.sh@55 -- # sort 00:25:05.845 07:05:12 -- host/discovery.sh@55 -- # xargs 00:25:05.845 07:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.845 07:05:12 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:25:05.845 07:05:12 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:05.845 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.845 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:05.845 07:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.845 07:05:12 -- host/discovery.sh@82 -- # get_subsystem_names 00:25:05.845 07:05:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.845 07:05:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.845 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.845 07:05:12 -- host/discovery.sh@59 -- # sort 00:25:05.845 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:05.845 07:05:12 -- host/discovery.sh@59 -- # xargs 00:25:05.845 07:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.845 07:05:12 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:25:05.845 07:05:12 -- 
host/discovery.sh@83 -- # get_bdev_list 00:25:05.845 07:05:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.845 07:05:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.845 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.845 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:05.845 07:05:12 -- host/discovery.sh@55 -- # sort 00:25:05.845 07:05:12 -- host/discovery.sh@55 -- # xargs 00:25:05.845 07:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.104 07:05:12 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:06.104 07:05:12 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:06.104 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.104 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:06.104 07:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.104 07:05:12 -- host/discovery.sh@86 -- # get_subsystem_names 00:25:06.104 07:05:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.104 07:05:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.104 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.104 07:05:12 -- host/discovery.sh@59 -- # sort 00:25:06.104 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:25:06.104 07:05:12 -- host/discovery.sh@59 -- # xargs 00:25:06.104 07:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.104 07:05:13 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:25:06.104 07:05:13 -- host/discovery.sh@87 -- # get_bdev_list 00:25:06.104 07:05:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.104 07:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.104 07:05:13 -- common/autotest_common.sh@10 -- # set +x 00:25:06.104 07:05:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.104 07:05:13 -- host/discovery.sh@55 -- # sort 00:25:06.104 07:05:13 
-- host/discovery.sh@55 -- # xargs 00:25:06.104 07:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.104 07:05:13 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:06.104 07:05:13 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.104 07:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.104 07:05:13 -- common/autotest_common.sh@10 -- # set +x 00:25:06.104 [2024-05-12 07:05:13.077023] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.104 07:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.104 07:05:13 -- host/discovery.sh@92 -- # get_subsystem_names 00:25:06.104 07:05:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.104 07:05:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.104 07:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.104 07:05:13 -- common/autotest_common.sh@10 -- # set +x 00:25:06.104 07:05:13 -- host/discovery.sh@59 -- # sort 00:25:06.105 07:05:13 -- host/discovery.sh@59 -- # xargs 00:25:06.105 07:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.105 07:05:13 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:06.105 07:05:13 -- host/discovery.sh@93 -- # get_bdev_list 00:25:06.105 07:05:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.105 07:05:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.105 07:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.105 07:05:13 -- common/autotest_common.sh@10 -- # set +x 00:25:06.105 07:05:13 -- host/discovery.sh@55 -- # sort 00:25:06.105 07:05:13 -- host/discovery.sh@55 -- # xargs 00:25:06.105 07:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.105 07:05:13 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:25:06.105 07:05:13 -- host/discovery.sh@94 -- # get_notification_count 
00:25:06.105 07:05:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:06.105 07:05:13 -- host/discovery.sh@74 -- # jq '. | length' 00:25:06.105 07:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.105 07:05:13 -- common/autotest_common.sh@10 -- # set +x 00:25:06.105 07:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.105 07:05:13 -- host/discovery.sh@74 -- # notification_count=0 00:25:06.105 07:05:13 -- host/discovery.sh@75 -- # notify_id=0 00:25:06.105 07:05:13 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:25:06.105 07:05:13 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:06.105 07:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.105 07:05:13 -- common/autotest_common.sh@10 -- # set +x 00:25:06.105 07:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.105 07:05:13 -- host/discovery.sh@100 -- # sleep 1 00:25:07.043 [2024-05-12 07:05:13.876921] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:07.043 [2024-05-12 07:05:13.876947] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:07.043 [2024-05-12 07:05:13.876994] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:07.043 [2024-05-12 07:05:13.963300] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:07.043 [2024-05-12 07:05:14.065204] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:07.043 [2024-05-12 07:05:14.065231] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:07.303 07:05:14 -- host/discovery.sh@101 -- # get_subsystem_names 
00:25:07.303 07:05:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:07.303 07:05:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:07.303 07:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.303 07:05:14 -- common/autotest_common.sh@10 -- # set +x 00:25:07.303 07:05:14 -- host/discovery.sh@59 -- # sort 00:25:07.303 07:05:14 -- host/discovery.sh@59 -- # xargs 00:25:07.303 07:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.303 07:05:14 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.303 07:05:14 -- host/discovery.sh@102 -- # get_bdev_list 00:25:07.303 07:05:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.303 07:05:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.304 07:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.304 07:05:14 -- host/discovery.sh@55 -- # sort 00:25:07.304 07:05:14 -- common/autotest_common.sh@10 -- # set +x 00:25:07.304 07:05:14 -- host/discovery.sh@55 -- # xargs 00:25:07.304 07:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.304 07:05:14 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:07.304 07:05:14 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:25:07.304 07:05:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:07.304 07:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.304 07:05:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:07.304 07:05:14 -- common/autotest_common.sh@10 -- # set +x 00:25:07.304 07:05:14 -- host/discovery.sh@63 -- # sort -n 00:25:07.304 07:05:14 -- host/discovery.sh@63 -- # xargs 00:25:07.304 07:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.304 07:05:14 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:25:07.304 07:05:14 -- host/discovery.sh@104 -- # get_notification_count 00:25:07.304 07:05:14 -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:07.304 07:05:14 -- host/discovery.sh@74 -- # jq '. | length' 00:25:07.304 07:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.304 07:05:14 -- common/autotest_common.sh@10 -- # set +x 00:25:07.304 07:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.304 07:05:14 -- host/discovery.sh@74 -- # notification_count=1 00:25:07.304 07:05:14 -- host/discovery.sh@75 -- # notify_id=1 00:25:07.304 07:05:14 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:25:07.304 07:05:14 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:07.304 07:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.304 07:05:14 -- common/autotest_common.sh@10 -- # set +x 00:25:07.304 07:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.304 07:05:14 -- host/discovery.sh@109 -- # sleep 1 00:25:08.684 07:05:15 -- host/discovery.sh@110 -- # get_bdev_list 00:25:08.684 07:05:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.684 07:05:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.684 07:05:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.684 07:05:15 -- common/autotest_common.sh@10 -- # set +x 00:25:08.684 07:05:15 -- host/discovery.sh@55 -- # sort 00:25:08.684 07:05:15 -- host/discovery.sh@55 -- # xargs 00:25:08.684 07:05:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.684 07:05:15 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:08.684 07:05:15 -- host/discovery.sh@111 -- # get_notification_count 00:25:08.684 07:05:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:08.684 07:05:15 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:08.684 07:05:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.684 07:05:15 -- common/autotest_common.sh@10 -- # set +x 00:25:08.684 07:05:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.684 07:05:15 -- host/discovery.sh@74 -- # notification_count=1 00:25:08.684 07:05:15 -- host/discovery.sh@75 -- # notify_id=2 00:25:08.684 07:05:15 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:25:08.684 07:05:15 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:08.684 07:05:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.685 07:05:15 -- common/autotest_common.sh@10 -- # set +x 00:25:08.685 [2024-05-12 07:05:15.488437] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:08.685 [2024-05-12 07:05:15.489658] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:08.685 [2024-05-12 07:05:15.489714] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:08.685 07:05:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.685 07:05:15 -- host/discovery.sh@117 -- # sleep 1 00:25:08.685 [2024-05-12 07:05:15.576922] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:08.943 [2024-05-12 07:05:15.883371] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:08.943 [2024-05-12 07:05:15.883397] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:08.943 [2024-05-12 07:05:15.883409] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:09.551 07:05:16 -- host/discovery.sh@118 -- # get_subsystem_names 
00:25:09.551 07:05:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.551 07:05:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.551 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:25:09.551 07:05:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.551 07:05:16 -- host/discovery.sh@59 -- # sort 00:25:09.551 07:05:16 -- host/discovery.sh@59 -- # xargs 00:25:09.551 07:05:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.551 07:05:16 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.551 07:05:16 -- host/discovery.sh@119 -- # get_bdev_list 00:25:09.551 07:05:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.551 07:05:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.551 07:05:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.551 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:25:09.551 07:05:16 -- host/discovery.sh@55 -- # sort 00:25:09.551 07:05:16 -- host/discovery.sh@55 -- # xargs 00:25:09.551 07:05:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.551 07:05:16 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:09.551 07:05:16 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:25:09.551 07:05:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:09.551 07:05:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.551 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:25:09.551 07:05:16 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:09.551 07:05:16 -- host/discovery.sh@63 -- # sort -n 00:25:09.551 07:05:16 -- host/discovery.sh@63 -- # xargs 00:25:09.551 07:05:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.551 07:05:16 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:09.551 07:05:16 -- host/discovery.sh@121 -- # 
get_notification_count 00:25:09.551 07:05:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:09.551 07:05:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.551 07:05:16 -- host/discovery.sh@74 -- # jq '. | length' 00:25:09.552 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:25:09.552 07:05:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.552 07:05:16 -- host/discovery.sh@74 -- # notification_count=0 00:25:09.552 07:05:16 -- host/discovery.sh@75 -- # notify_id=2 00:25:09.552 07:05:16 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:25:09.552 07:05:16 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:09.552 07:05:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.552 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:25:09.552 [2024-05-12 07:05:16.656342] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:09.552 [2024-05-12 07:05:16.656371] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.552 [2024-05-12 07:05:16.657751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.552 [2024-05-12 07:05:16.657781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.552 [2024-05-12 07:05:16.657807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.552 [2024-05-12 07:05:16.657821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.552 [2024-05-12 07:05:16.657835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
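`get_notification_count` above asks the target for every notification issued after the last seen `notify_id`, counts the result, and advances the cursor (hence `notification_count=0`, `notify_id=2` at this point in the run). A hedged bash sketch of that bookkeeping, with a stub standing in for the `rpc_cmd notify_get_notifications -i <id>` call (the stub and its data are illustrative only):

```shell
# Stub in place of: rpc_cmd notify_get_notifications -i "$notify_id"
# Emits one notification id per line that is newer than the cursor.
fake_notifications() {
  local since=$1 id
  for id in 1 2 3 4; do
    if (( id > since )); then
      echo "$id"
    fi
  done
}

notify_id=2
new=$(fake_notifications "$notify_id")
notification_count=$(wc -l <<< "$new")
if [ -z "$new" ]; then
  # wc -l counts the here-string's trailing newline, so guard the empty case
  notification_count=0
fi
notify_id=$(( notify_id + notification_count ))
echo "count=$notification_count notify_id=$notify_id"
```

With the stub data this prints `count=2 notify_id=4`, which is the same cursor advance the later `notify_id=4` check in this log relies on.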
nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.552 [2024-05-12 07:05:16.657848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.552 [2024-05-12 07:05:16.657862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.552 [2024-05-12 07:05:16.657875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.552 [2024-05-12 07:05:16.657900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.552 07:05:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.552 07:05:16 -- host/discovery.sh@127 -- # sleep 1 00:25:09.552 [2024-05-12 07:05:16.667746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.552 [2024-05-12 07:05:16.677790] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.552 [2024-05-12 07:05:16.678043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.552 [2024-05-12 07:05:16.678251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.552 [2024-05-12 07:05:16.678280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.552 [2024-05-12 07:05:16.678323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.552 [2024-05-12 07:05:16.678349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.552 [2024-05-12 07:05:16.678373] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.552 [2024-05-12 07:05:16.678390] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.552 [2024-05-12 07:05:16.678406] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.552 [2024-05-12 07:05:16.678428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:09.811 [2024-05-12 07:05:16.687873] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.811 [2024-05-12 07:05:16.688132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.811 [2024-05-12 07:05:16.688311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.811 [2024-05-12 07:05:16.688342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.811 [2024-05-12 07:05:16.688359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.811 [2024-05-12 07:05:16.688384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.811 [2024-05-12 07:05:16.688408] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.811 [2024-05-12 07:05:16.688424] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.688439] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.688460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.812 [2024-05-12 07:05:16.697944] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.812 [2024-05-12 07:05:16.698125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.698297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.698323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.812 [2024-05-12 07:05:16.698339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.812 [2024-05-12 07:05:16.698361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.812 [2024-05-12 07:05:16.698382] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.812 [2024-05-12 07:05:16.698396] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.698408] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.698427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.812 [2024-05-12 07:05:16.708028] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.812 [2024-05-12 07:05:16.708281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.708486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.708522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.812 [2024-05-12 07:05:16.708539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.812 [2024-05-12 07:05:16.708578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.812 [2024-05-12 07:05:16.708612] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.812 [2024-05-12 07:05:16.708630] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.708644] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.708678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.812 [2024-05-12 07:05:16.718100] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.812 [2024-05-12 07:05:16.718365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.718581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.718607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.812 [2024-05-12 07:05:16.718622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.812 [2024-05-12 07:05:16.718662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.812 [2024-05-12 07:05:16.718707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.812 [2024-05-12 07:05:16.718729] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.718744] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.718765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.812 [2024-05-12 07:05:16.728177] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.812 [2024-05-12 07:05:16.728396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.728617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.728645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.812 [2024-05-12 07:05:16.728662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.812 [2024-05-12 07:05:16.728686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.812 [2024-05-12 07:05:16.728719] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.812 [2024-05-12 07:05:16.728748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.728761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.728807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.812 [2024-05-12 07:05:16.738251] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.812 [2024-05-12 07:05:16.738495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.738710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.738737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.812 [2024-05-12 07:05:16.738758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.812 [2024-05-12 07:05:16.738781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.812 [2024-05-12 07:05:16.738815] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.812 [2024-05-12 07:05:16.738850] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.738863] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.738882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.812 [2024-05-12 07:05:16.748328] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.812 [2024-05-12 07:05:16.748554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.748770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.748797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.812 [2024-05-12 07:05:16.748813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.812 [2024-05-12 07:05:16.748836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.812 [2024-05-12 07:05:16.748872] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.812 [2024-05-12 07:05:16.748891] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.748904] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.748950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.812 [2024-05-12 07:05:16.758407] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.812 [2024-05-12 07:05:16.758637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.758831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.758858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.812 [2024-05-12 07:05:16.758874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.812 [2024-05-12 07:05:16.758896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.812 [2024-05-12 07:05:16.758930] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.812 [2024-05-12 07:05:16.758949] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.758962] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.758996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.812 [2024-05-12 07:05:16.768485] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.812 [2024-05-12 07:05:16.768748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.768930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.768957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.812 [2024-05-12 07:05:16.768973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.812 [2024-05-12 07:05:16.769111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.812 [2024-05-12 07:05:16.769164] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.812 [2024-05-12 07:05:16.769186] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.769201] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.769222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.812 [2024-05-12 07:05:16.778561] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.812 [2024-05-12 07:05:16.778817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.778971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.812 [2024-05-12 07:05:16.779022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdb70 with addr=10.0.0.2, port=4420 00:25:09.812 [2024-05-12 07:05:16.779039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdb70 is same with the state(5) to be set 00:25:09.812 [2024-05-12 07:05:16.779064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdb70 (9): Bad file descriptor 00:25:09.812 [2024-05-12 07:05:16.779087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.812 [2024-05-12 07:05:16.779103] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.812 [2024-05-12 07:05:16.779118] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.812 [2024-05-12 07:05:16.779140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
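The burst of `connect() failed, errno = 111` (ECONNREFUSED) entries above is the host driver retrying the just-removed 4420 listener roughly every 10 ms until the discovery poller drops the stale path. A simplified bash sketch of such a bounded retry loop; the `try_connect` stub is illustrative, since SPDK does this in C inside `bdev_nvme`:

```shell
try_connect() { return 111; }   # stub: always ECONNREFUSED (errno 111)

attempts=0
rc=0
for i in 1 2 3 4 5 6 7 8 9 10; do
  rc=0
  try_connect || rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "connected"
    break
  fi
  attempts=$(( attempts + 1 ))
  echo "attempt $i: connect() failed, errno = $rc"
  sleep 0.01   # ~10 ms between retries, matching the log timestamps
done
echo "gave up after $attempts attempts"
```

With the always-failing stub this exhausts all 10 attempts, mirroring the ten `resetting controller` / `Resetting controller failed.` cycles recorded above.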
00:25:09.812 [2024-05-12 07:05:16.784203] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:09.812 [2024-05-12 07:05:16.784249] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:10.751 07:05:17 -- host/discovery.sh@128 -- # get_subsystem_names 00:25:10.751 07:05:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.751 07:05:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.751 07:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.751 07:05:17 -- host/discovery.sh@59 -- # sort 00:25:10.751 07:05:17 -- common/autotest_common.sh@10 -- # set +x 00:25:10.751 07:05:17 -- host/discovery.sh@59 -- # xargs 00:25:10.751 07:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.751 07:05:17 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.751 07:05:17 -- host/discovery.sh@129 -- # get_bdev_list 00:25:10.751 07:05:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.751 07:05:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.751 07:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.751 07:05:17 -- common/autotest_common.sh@10 -- # set +x 00:25:10.751 07:05:17 -- host/discovery.sh@55 -- # sort 00:25:10.751 07:05:17 -- host/discovery.sh@55 -- # xargs 00:25:10.751 07:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.751 07:05:17 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:10.751 07:05:17 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:25:10.751 07:05:17 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:10.751 07:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.751 07:05:17 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:25:10.751 07:05:17 -- common/autotest_common.sh@10 -- # set +x 00:25:10.751 07:05:17 -- host/discovery.sh@63 -- # sort -n 00:25:10.751 07:05:17 -- host/discovery.sh@63 -- # xargs 00:25:10.751 07:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.751 07:05:17 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:25:10.751 07:05:17 -- host/discovery.sh@131 -- # get_notification_count 00:25:10.751 07:05:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:10.751 07:05:17 -- host/discovery.sh@74 -- # jq '. | length' 00:25:10.751 07:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.751 07:05:17 -- common/autotest_common.sh@10 -- # set +x 00:25:10.751 07:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.751 07:05:17 -- host/discovery.sh@74 -- # notification_count=0 00:25:10.751 07:05:17 -- host/discovery.sh@75 -- # notify_id=2 00:25:10.751 07:05:17 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:25:10.751 07:05:17 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:10.751 07:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.751 07:05:17 -- common/autotest_common.sh@10 -- # set +x 00:25:10.751 07:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.751 07:05:17 -- host/discovery.sh@135 -- # sleep 1 00:25:12.131 07:05:18 -- host/discovery.sh@136 -- # get_subsystem_names 00:25:12.131 07:05:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:12.131 07:05:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:12.131 07:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.131 07:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.131 07:05:18 -- host/discovery.sh@59 -- # sort 00:25:12.131 07:05:18 -- host/discovery.sh@59 -- # xargs 00:25:12.131 07:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.131 07:05:18 
-- host/discovery.sh@136 -- # [[ '' == '' ]] 00:25:12.131 07:05:18 -- host/discovery.sh@137 -- # get_bdev_list 00:25:12.131 07:05:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.131 07:05:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:12.131 07:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.131 07:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.131 07:05:18 -- host/discovery.sh@55 -- # sort 00:25:12.131 07:05:18 -- host/discovery.sh@55 -- # xargs 00:25:12.131 07:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.131 07:05:18 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:25:12.131 07:05:18 -- host/discovery.sh@138 -- # get_notification_count 00:25:12.131 07:05:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:12.131 07:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.131 07:05:18 -- host/discovery.sh@74 -- # jq '. | length' 00:25:12.131 07:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.131 07:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.131 07:05:18 -- host/discovery.sh@74 -- # notification_count=2 00:25:12.131 07:05:18 -- host/discovery.sh@75 -- # notify_id=4 00:25:12.131 07:05:18 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:25:12.131 07:05:18 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:12.131 07:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.131 07:05:18 -- common/autotest_common.sh@10 -- # set +x 00:25:13.066 [2024-05-12 07:05:20.041993] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:13.066 [2024-05-12 07:05:20.042035] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:13.066 [2024-05-12 07:05:20.042088] 
bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:13.066 [2024-05-12 07:05:20.129358] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:13.324 [2024-05-12 07:05:20.398363] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:13.324 [2024-05-12 07:05:20.398417] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:13.324 07:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.324 07:05:20 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.324 07:05:20 -- common/autotest_common.sh@640 -- # local es=0 00:25:13.324 07:05:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.324 07:05:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:13.324 07:05:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.324 07:05:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:13.324 07:05:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.324 07:05:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.324 07:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.324 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.324 request: 00:25:13.324 { 00:25:13.324 "name": "nvme", 00:25:13.324 "trtype": "tcp", 00:25:13.324 "traddr": "10.0.0.2", 00:25:13.324 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:13.324 "adrfam": 
"ipv4", 00:25:13.324 "trsvcid": "8009", 00:25:13.324 "wait_for_attach": true, 00:25:13.324 "method": "bdev_nvme_start_discovery", 00:25:13.324 "req_id": 1 00:25:13.324 } 00:25:13.324 Got JSON-RPC error response 00:25:13.324 response: 00:25:13.324 { 00:25:13.324 "code": -17, 00:25:13.324 "message": "File exists" 00:25:13.324 } 00:25:13.324 07:05:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:13.324 07:05:20 -- common/autotest_common.sh@643 -- # es=1 00:25:13.324 07:05:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:13.324 07:05:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:13.324 07:05:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:13.324 07:05:20 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:25:13.324 07:05:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:13.324 07:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.324 07:05:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:13.324 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.324 07:05:20 -- host/discovery.sh@67 -- # sort 00:25:13.324 07:05:20 -- host/discovery.sh@67 -- # xargs 00:25:13.324 07:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.582 07:05:20 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:25:13.582 07:05:20 -- host/discovery.sh@147 -- # get_bdev_list 00:25:13.582 07:05:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.582 07:05:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.582 07:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.582 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.582 07:05:20 -- host/discovery.sh@55 -- # sort 00:25:13.582 07:05:20 -- host/discovery.sh@55 -- # xargs 00:25:13.582 07:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.582 07:05:20 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 
00:25:13.582 07:05:20 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.582 07:05:20 -- common/autotest_common.sh@640 -- # local es=0 00:25:13.582 07:05:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.582 07:05:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:13.582 07:05:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.582 07:05:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:13.582 07:05:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.582 07:05:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.582 07:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.582 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.582 request: 00:25:13.582 { 00:25:13.582 "name": "nvme_second", 00:25:13.582 "trtype": "tcp", 00:25:13.582 "traddr": "10.0.0.2", 00:25:13.582 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:13.582 "adrfam": "ipv4", 00:25:13.582 "trsvcid": "8009", 00:25:13.582 "wait_for_attach": true, 00:25:13.582 "method": "bdev_nvme_start_discovery", 00:25:13.582 "req_id": 1 00:25:13.582 } 00:25:13.582 Got JSON-RPC error response 00:25:13.582 response: 00:25:13.582 { 00:25:13.582 "code": -17, 00:25:13.582 "message": "File exists" 00:25:13.582 } 00:25:13.582 07:05:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:13.582 07:05:20 -- common/autotest_common.sh@643 -- # es=1 00:25:13.582 07:05:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:13.582 07:05:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:13.582 07:05:20 -- 
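Both `"File exists"` (code -17) responses above are expected failures: the script wraps the duplicate `bdev_nvme_start_discovery` calls in a `NOT` helper, so the RPC erroring out makes the test pass and an unexpected success makes it fail. A minimal sketch of that inversion idiom (SPDK's real helper in `autotest_common.sh` is more elaborate, tracking `es` as seen in the trace):

```shell
# Invert a command's exit status: succeed only if the command fails.
NOT() {
  if "$@"; then
    return 1
  fi
  return 0
}

# Illustrative stand-in for an rpc_cmd that returns JSON-RPC error -17.
fail_with_eexist() {
  echo '{"code": -17, "message": "File exists"}' >&2
  return 17
}

if NOT fail_with_eexist; then
  echo "duplicate start correctly rejected"
fi
NOT true || echo "unexpected success detected"
```

The pattern keeps the test's own exit status positive while still surfacing the error payload on stderr, which is why the `request:`/`response:` JSON appears in the log even though the step passed.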
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:13.582 07:05:20 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:25:13.582 07:05:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:13.582 07:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.582 07:05:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:13.582 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.582 07:05:20 -- host/discovery.sh@67 -- # sort 00:25:13.582 07:05:20 -- host/discovery.sh@67 -- # xargs 00:25:13.582 07:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.582 07:05:20 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:25:13.582 07:05:20 -- host/discovery.sh@153 -- # get_bdev_list 00:25:13.582 07:05:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.582 07:05:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.582 07:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.582 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.582 07:05:20 -- host/discovery.sh@55 -- # sort 00:25:13.582 07:05:20 -- host/discovery.sh@55 -- # xargs 00:25:13.582 07:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.582 07:05:20 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:13.582 07:05:20 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:13.582 07:05:20 -- common/autotest_common.sh@640 -- # local es=0 00:25:13.582 07:05:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:13.582 07:05:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:13.582 07:05:20 -- common/autotest_common.sh@632 -- # case "$(type -t 
"$arg")" in 00:25:13.582 07:05:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:13.582 07:05:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.582 07:05:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:13.582 07:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.582 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:25:14.517 [2024-05-12 07:05:21.601809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.517 [2024-05-12 07:05:21.601986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.517 [2024-05-12 07:05:21.602013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146edf0 with addr=10.0.0.2, port=8010 00:25:14.517 [2024-05-12 07:05:21.602034] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:14.517 [2024-05-12 07:05:21.602049] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:14.517 [2024-05-12 07:05:21.602061] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:15.890 [2024-05-12 07:05:22.604221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.890 [2024-05-12 07:05:22.604443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.890 [2024-05-12 07:05:22.604471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146edf0 with addr=10.0.0.2, port=8010 00:25:15.890 [2024-05-12 07:05:22.604490] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:15.890 [2024-05-12 07:05:22.604503] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:15.890 [2024-05-12 07:05:22.604515] bdev_nvme.c:6815:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:25:16.842 [2024-05-12 07:05:23.606459] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:16.842 request: 00:25:16.842 { 00:25:16.842 "name": "nvme_second", 00:25:16.842 "trtype": "tcp", 00:25:16.842 "traddr": "10.0.0.2", 00:25:16.842 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:16.842 "adrfam": "ipv4", 00:25:16.842 "trsvcid": "8010", 00:25:16.842 "attach_timeout_ms": 3000, 00:25:16.842 "method": "bdev_nvme_start_discovery", 00:25:16.842 "req_id": 1 00:25:16.842 } 00:25:16.842 Got JSON-RPC error response 00:25:16.842 response: 00:25:16.842 { 00:25:16.842 "code": -110, 00:25:16.842 "message": "Connection timed out" 00:25:16.842 } 00:25:16.842 07:05:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:16.842 07:05:23 -- common/autotest_common.sh@643 -- # es=1 00:25:16.842 07:05:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:16.842 07:05:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:16.842 07:05:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:16.842 07:05:23 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:25:16.842 07:05:23 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:16.842 07:05:23 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:16.842 07:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.842 07:05:23 -- common/autotest_common.sh@10 -- # set +x 00:25:16.842 07:05:23 -- host/discovery.sh@67 -- # sort 00:25:16.842 07:05:23 -- host/discovery.sh@67 -- # xargs 00:25:16.842 07:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.842 07:05:23 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:25:16.842 07:05:23 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:25:16.842 07:05:23 -- host/discovery.sh@162 -- # kill 3128958 00:25:16.842 07:05:23 -- host/discovery.sh@163 -- # nvmftestfini 00:25:16.842 
07:05:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:16.842 07:05:23 -- nvmf/common.sh@116 -- # sync 00:25:16.842 07:05:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:16.842 07:05:23 -- nvmf/common.sh@119 -- # set +e 00:25:16.842 07:05:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:16.842 07:05:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:16.842 rmmod nvme_tcp 00:25:16.842 rmmod nvme_fabrics 00:25:16.842 rmmod nvme_keyring 00:25:16.842 07:05:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:16.842 07:05:23 -- nvmf/common.sh@123 -- # set -e 00:25:16.842 07:05:23 -- nvmf/common.sh@124 -- # return 0 00:25:16.842 07:05:23 -- nvmf/common.sh@477 -- # '[' -n 3128807 ']' 00:25:16.842 07:05:23 -- nvmf/common.sh@478 -- # killprocess 3128807 00:25:16.842 07:05:23 -- common/autotest_common.sh@926 -- # '[' -z 3128807 ']' 00:25:16.842 07:05:23 -- common/autotest_common.sh@930 -- # kill -0 3128807 00:25:16.842 07:05:23 -- common/autotest_common.sh@931 -- # uname 00:25:16.842 07:05:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:16.842 07:05:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3128807 00:25:16.842 07:05:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:16.842 07:05:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:16.842 07:05:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3128807' 00:25:16.842 killing process with pid 3128807 00:25:16.842 07:05:23 -- common/autotest_common.sh@945 -- # kill 3128807 00:25:16.842 07:05:23 -- common/autotest_common.sh@950 -- # wait 3128807 00:25:17.102 07:05:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:17.102 07:05:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:17.102 07:05:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:17.102 07:05:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.102 07:05:24 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:25:17.102 07:05:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.102 07:05:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.102 07:05:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.010 07:05:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:19.010 00:25:19.010 real 0m17.408s 00:25:19.010 user 0m27.140s 00:25:19.010 sys 0m2.909s 00:25:19.010 07:05:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.010 07:05:26 -- common/autotest_common.sh@10 -- # set +x 00:25:19.010 ************************************ 00:25:19.010 END TEST nvmf_discovery 00:25:19.010 ************************************ 00:25:19.010 07:05:26 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:19.010 07:05:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:19.010 07:05:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:19.010 07:05:26 -- common/autotest_common.sh@10 -- # set +x 00:25:19.010 ************************************ 00:25:19.010 START TEST nvmf_discovery_remove_ifc 00:25:19.010 ************************************ 00:25:19.010 07:05:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:19.268 * Looking for test storage... 
00:25:19.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:19.268 07:05:26 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.268 07:05:26 -- nvmf/common.sh@7 -- # uname -s 00:25:19.268 07:05:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.268 07:05:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.268 07:05:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.268 07:05:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.268 07:05:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.268 07:05:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.268 07:05:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.268 07:05:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.268 07:05:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.268 07:05:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.268 07:05:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.268 07:05:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.268 07:05:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.268 07:05:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.268 07:05:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.268 07:05:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.268 07:05:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.268 07:05:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.268 07:05:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.268 07:05:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.268 07:05:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.268 07:05:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.268 07:05:26 -- paths/export.sh@5 -- # export PATH 00:25:19.268 07:05:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.268 07:05:26 -- nvmf/common.sh@46 -- # : 0 00:25:19.268 07:05:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:19.268 07:05:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:19.268 07:05:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:19.268 07:05:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.269 07:05:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.269 07:05:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:19.269 07:05:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:19.269 07:05:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:19.269 07:05:26 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:19.269 07:05:26 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:19.269 07:05:26 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:19.269 07:05:26 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:19.269 07:05:26 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:19.269 07:05:26 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:19.269 07:05:26 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:19.269 07:05:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:19.269 07:05:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.269 07:05:26 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:25:19.269 07:05:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:19.269 07:05:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:19.269 07:05:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.269 07:05:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.269 07:05:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.269 07:05:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:19.269 07:05:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:19.269 07:05:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:19.269 07:05:26 -- common/autotest_common.sh@10 -- # set +x 00:25:21.174 07:05:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:21.174 07:05:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:21.174 07:05:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:21.174 07:05:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:21.174 07:05:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:21.174 07:05:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:21.174 07:05:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:21.174 07:05:28 -- nvmf/common.sh@294 -- # net_devs=() 00:25:21.174 07:05:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:21.174 07:05:28 -- nvmf/common.sh@295 -- # e810=() 00:25:21.174 07:05:28 -- nvmf/common.sh@295 -- # local -ga e810 00:25:21.174 07:05:28 -- nvmf/common.sh@296 -- # x722=() 00:25:21.174 07:05:28 -- nvmf/common.sh@296 -- # local -ga x722 00:25:21.174 07:05:28 -- nvmf/common.sh@297 -- # mlx=() 00:25:21.174 07:05:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:21.175 07:05:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.175 07:05:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:21.175 07:05:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:21.175 07:05:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:21.175 07:05:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.175 07:05:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:21.175 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:21.175 07:05:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.175 07:05:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:21.175 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:21.175 07:05:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:21.175 
07:05:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:21.175 07:05:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.175 07:05:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.175 07:05:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.175 07:05:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.175 07:05:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:21.175 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:21.175 07:05:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.175 07:05:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.175 07:05:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.175 07:05:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.175 07:05:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.175 07:05:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:21.175 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:21.175 07:05:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.175 07:05:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:21.175 07:05:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:21.175 07:05:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:21.175 07:05:28 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:21.175 07:05:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.175 07:05:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.175 07:05:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:21.175 07:05:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.175 07:05:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.175 07:05:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:21.175 07:05:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.175 07:05:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.175 07:05:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:21.175 07:05:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:21.175 07:05:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.175 07:05:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.175 07:05:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.175 07:05:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.175 07:05:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:21.175 07:05:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.175 07:05:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.175 07:05:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.175 07:05:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:21.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:21.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:25:21.175 00:25:21.175 --- 10.0.0.2 ping statistics --- 00:25:21.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.175 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:25:21.175 07:05:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:25:21.175 00:25:21.175 --- 10.0.0.1 ping statistics --- 00:25:21.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.175 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:21.175 07:05:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.175 07:05:28 -- nvmf/common.sh@410 -- # return 0 00:25:21.175 07:05:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:21.175 07:05:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.175 07:05:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:21.175 07:05:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.175 07:05:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:21.175 07:05:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:21.175 07:05:28 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:21.175 07:05:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:21.175 07:05:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:21.175 07:05:28 -- common/autotest_common.sh@10 -- # set +x 00:25:21.175 07:05:28 -- nvmf/common.sh@469 -- # nvmfpid=3132443 00:25:21.175 07:05:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:21.175 07:05:28 -- nvmf/common.sh@470 -- # waitforlisten 3132443 00:25:21.175 07:05:28 -- 
common/autotest_common.sh@819 -- # '[' -z 3132443 ']' 00:25:21.175 07:05:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.175 07:05:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:21.175 07:05:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.175 07:05:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:21.175 07:05:28 -- common/autotest_common.sh@10 -- # set +x 00:25:21.175 [2024-05-12 07:05:28.202396] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:21.175 [2024-05-12 07:05:28.202470] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.175 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.175 [2024-05-12 07:05:28.264856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.435 [2024-05-12 07:05:28.373530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:21.435 [2024-05-12 07:05:28.373670] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.435 [2024-05-12 07:05:28.373713] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.435 [2024-05-12 07:05:28.373727] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:21.435 [2024-05-12 07:05:28.373778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.369 07:05:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.369 07:05:29 -- common/autotest_common.sh@852 -- # return 0 00:25:22.369 07:05:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:22.369 07:05:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:22.369 07:05:29 -- common/autotest_common.sh@10 -- # set +x 00:25:22.369 07:05:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.369 07:05:29 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:22.369 07:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.369 07:05:29 -- common/autotest_common.sh@10 -- # set +x 00:25:22.369 [2024-05-12 07:05:29.192768] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.369 [2024-05-12 07:05:29.200906] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:22.369 null0 00:25:22.369 [2024-05-12 07:05:29.232878] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.369 07:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.369 07:05:29 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3132599 00:25:22.369 07:05:29 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:22.369 07:05:29 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3132599 /tmp/host.sock 00:25:22.369 07:05:29 -- common/autotest_common.sh@819 -- # '[' -z 3132599 ']' 00:25:22.369 07:05:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:22.369 07:05:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:22.369 07:05:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /tmp/host.sock...' 00:25:22.369 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:22.369 07:05:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:22.369 07:05:29 -- common/autotest_common.sh@10 -- # set +x 00:25:22.369 [2024-05-12 07:05:29.292093] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:22.369 [2024-05-12 07:05:29.292166] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132599 ] 00:25:22.369 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.369 [2024-05-12 07:05:29.354584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.369 [2024-05-12 07:05:29.467765] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:22.369 [2024-05-12 07:05:29.467940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.369 07:05:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.369 07:05:29 -- common/autotest_common.sh@852 -- # return 0 00:25:22.369 07:05:29 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.369 07:05:29 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:22.369 07:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.369 07:05:29 -- common/autotest_common.sh@10 -- # set +x 00:25:22.628 07:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.628 07:05:29 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:22.628 07:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.628 07:05:29 -- common/autotest_common.sh@10 -- # set +x 00:25:22.628 07:05:29 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.628 07:05:29 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:22.628 07:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.628 07:05:29 -- common/autotest_common.sh@10 -- # set +x 00:25:23.562 [2024-05-12 07:05:30.651787] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:23.562 [2024-05-12 07:05:30.651818] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:23.562 [2024-05-12 07:05:30.651840] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.820 [2024-05-12 07:05:30.779269] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:23.820 [2024-05-12 07:05:30.922129] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:23.820 [2024-05-12 07:05:30.922186] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:23.820 [2024-05-12 07:05:30.922229] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:23.820 [2024-05-12 07:05:30.922255] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:23.820 [2024-05-12 07:05:30.922291] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:23.820 07:05:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.820 07:05:30 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:23.820 07:05:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.820 07:05:30 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.820 07:05:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.820 07:05:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.820 07:05:30 -- common/autotest_common.sh@10 -- # set +x 00:25:23.820 07:05:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.820 07:05:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.820 [2024-05-12 07:05:30.929097] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f6d220 was disconnected and freed. delete nvme_qpair. 00:25:23.820 07:05:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.077 07:05:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:24.077 07:05:30 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:24.077 07:05:30 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:24.077 07:05:31 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:24.077 07:05:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.077 07:05:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.077 07:05:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.077 07:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.077 07:05:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.077 07:05:31 -- common/autotest_common.sh@10 -- # set +x 00:25:24.077 07:05:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.077 07:05:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.077 07:05:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:24.077 07:05:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.027 07:05:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.027 07:05:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:25:25.027 07:05:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.027 07:05:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.027 07:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.027 07:05:32 -- common/autotest_common.sh@10 -- # set +x 00:25:25.027 07:05:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.027 07:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.027 07:05:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.027 07:05:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.407 07:05:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.407 07:05:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.407 07:05:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.407 07:05:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.407 07:05:33 -- common/autotest_common.sh@10 -- # set +x 00:25:26.407 07:05:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.407 07:05:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.407 07:05:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:26.407 07:05:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:26.407 07:05:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:27.344 07:05:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.344 07:05:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.344 07:05:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.344 07:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.344 07:05:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.344 07:05:34 -- common/autotest_common.sh@10 -- # set +x 00:25:27.344 07:05:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.344 07:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.344 07:05:34 
-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:27.344 07:05:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:28.278 07:05:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.278 07:05:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.278 07:05:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.278 07:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.278 07:05:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.278 07:05:35 -- common/autotest_common.sh@10 -- # set +x 00:25:28.278 07:05:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.278 07:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.278 07:05:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:28.278 07:05:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.211 07:05:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.211 07:05:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.211 07:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.211 07:05:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.211 07:05:36 -- common/autotest_common.sh@10 -- # set +x 00:25:29.211 07:05:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.211 07:05:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.211 07:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.211 07:05:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:29.211 07:05:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.470 [2024-05-12 07:05:36.363221] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:29.470 [2024-05-12 07:05:36.363291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.470 [2024-05-12 07:05:36.363323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.470 [2024-05-12 07:05:36.363345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.470 [2024-05-12 07:05:36.363361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.470 [2024-05-12 07:05:36.363377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.470 [2024-05-12 07:05:36.363392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.470 [2024-05-12 07:05:36.363407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.470 [2024-05-12 07:05:36.363422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.470 [2024-05-12 07:05:36.363438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.470 [2024-05-12 07:05:36.363453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.470 [2024-05-12 07:05:36.363467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f33800 is same with the state(5) to be set 00:25:29.470 [2024-05-12 07:05:36.373239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f33800 (9): Bad file descriptor 00:25:29.470 [2024-05-12 07:05:36.383289] 
nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:30.407 07:05:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:30.407 07:05:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.407 07:05:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:30.407 07:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.407 07:05:37 -- common/autotest_common.sh@10 -- # set +x 00:25:30.407 07:05:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:30.407 07:05:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:30.407 [2024-05-12 07:05:37.414735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:31.344 [2024-05-12 07:05:38.438722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:31.344 [2024-05-12 07:05:38.438780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33800 with addr=10.0.0.2, port=4420 00:25:31.344 [2024-05-12 07:05:38.438806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f33800 is same with the state(5) to be set 00:25:31.344 [2024-05-12 07:05:38.439259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f33800 (9): Bad file descriptor 00:25:31.344 [2024-05-12 07:05:38.439304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.344 [2024-05-12 07:05:38.439345] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:31.344 [2024-05-12 07:05:38.439385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.344 [2024-05-12 07:05:38.439409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.344 [2024-05-12 07:05:38.439431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.344 [2024-05-12 07:05:38.439446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.344 [2024-05-12 07:05:38.439462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.344 [2024-05-12 07:05:38.439484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.344 [2024-05-12 07:05:38.439500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.344 [2024-05-12 07:05:38.439516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.344 [2024-05-12 07:05:38.439531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.344 [2024-05-12 07:05:38.439546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.344 [2024-05-12 07:05:38.439561] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:25:31.344 [2024-05-12 07:05:38.439843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f33c10 (9): Bad file descriptor 00:25:31.344 [2024-05-12 07:05:38.440861] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:31.344 [2024-05-12 07:05:38.440882] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:31.344 07:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.344 07:05:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:31.344 07:05:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.719 07:05:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.719 07:05:39 -- common/autotest_common.sh@10 -- # set +x 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.719 07:05:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.719 07:05:39 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.719 07:05:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.719 07:05:39 -- common/autotest_common.sh@10 -- # set +x 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.719 07:05:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:32.719 07:05:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.656 [2024-05-12 07:05:40.456908] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:33.656 [2024-05-12 07:05:40.456947] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:33.656 [2024-05-12 07:05:40.456969] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:33.656 07:05:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.656 07:05:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.656 07:05:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.656 07:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.656 07:05:40 -- common/autotest_common.sh@10 -- # set +x 00:25:33.656 07:05:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.656 07:05:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.656 [2024-05-12 07:05:40.585403] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:33.656 07:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.656 07:05:40 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:33.656 07:05:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.656 [2024-05-12 07:05:40.766991] 
bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:33.656 [2024-05-12 07:05:40.767051] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:33.656 [2024-05-12 07:05:40.767090] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:33.656 [2024-05-12 07:05:40.767115] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:33.656 [2024-05-12 07:05:40.767129] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:33.656 [2024-05-12 07:05:40.775589] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f74ea0 was disconnected and freed. delete nvme_qpair. 00:25:34.591 07:05:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.591 07:05:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.591 07:05:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.591 07:05:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.591 07:05:41 -- common/autotest_common.sh@10 -- # set +x 00:25:34.591 07:05:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.591 07:05:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.591 07:05:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.591 07:05:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:34.591 07:05:41 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:34.591 07:05:41 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3132599 00:25:34.591 07:05:41 -- common/autotest_common.sh@926 -- # '[' -z 3132599 ']' 00:25:34.591 07:05:41 -- common/autotest_common.sh@930 -- # kill -0 3132599 00:25:34.591 07:05:41 -- common/autotest_common.sh@931 -- # uname 00:25:34.591 07:05:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:34.591 07:05:41 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3132599 00:25:34.591 07:05:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:34.591 07:05:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:34.591 07:05:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3132599' 00:25:34.591 killing process with pid 3132599 00:25:34.591 07:05:41 -- common/autotest_common.sh@945 -- # kill 3132599 00:25:34.591 07:05:41 -- common/autotest_common.sh@950 -- # wait 3132599 00:25:34.849 07:05:41 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:34.849 07:05:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:34.849 07:05:41 -- nvmf/common.sh@116 -- # sync 00:25:34.849 07:05:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:34.849 07:05:41 -- nvmf/common.sh@119 -- # set +e 00:25:34.849 07:05:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:34.849 07:05:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:34.849 rmmod nvme_tcp 00:25:34.849 rmmod nvme_fabrics 00:25:35.106 rmmod nvme_keyring 00:25:35.106 07:05:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:35.106 07:05:42 -- nvmf/common.sh@123 -- # set -e 00:25:35.106 07:05:42 -- nvmf/common.sh@124 -- # return 0 00:25:35.106 07:05:42 -- nvmf/common.sh@477 -- # '[' -n 3132443 ']' 00:25:35.106 07:05:42 -- nvmf/common.sh@478 -- # killprocess 3132443 00:25:35.106 07:05:42 -- common/autotest_common.sh@926 -- # '[' -z 3132443 ']' 00:25:35.106 07:05:42 -- common/autotest_common.sh@930 -- # kill -0 3132443 00:25:35.106 07:05:42 -- common/autotest_common.sh@931 -- # uname 00:25:35.106 07:05:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:35.106 07:05:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3132443 00:25:35.106 07:05:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:35.106 07:05:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:35.106 07:05:42 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 3132443' 00:25:35.106 killing process with pid 3132443 00:25:35.106 07:05:42 -- common/autotest_common.sh@945 -- # kill 3132443 00:25:35.106 07:05:42 -- common/autotest_common.sh@950 -- # wait 3132443 00:25:35.364 07:05:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:35.364 07:05:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:35.364 07:05:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:35.364 07:05:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.364 07:05:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:35.364 07:05:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.364 07:05:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.364 07:05:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.267 07:05:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:37.267 00:25:37.267 real 0m18.223s 00:25:37.267 user 0m25.463s 00:25:37.267 sys 0m2.859s 00:25:37.267 07:05:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.267 07:05:44 -- common/autotest_common.sh@10 -- # set +x 00:25:37.267 ************************************ 00:25:37.267 END TEST nvmf_discovery_remove_ifc 00:25:37.267 ************************************ 00:25:37.267 07:05:44 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:25:37.267 07:05:44 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:37.267 07:05:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:37.267 07:05:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:37.267 07:05:44 -- common/autotest_common.sh@10 -- # set +x 00:25:37.267 ************************************ 00:25:37.267 START TEST nvmf_digest 00:25:37.267 ************************************ 00:25:37.267 07:05:44 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:37.267 * Looking for test storage... 00:25:37.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.267 07:05:44 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.267 07:05:44 -- nvmf/common.sh@7 -- # uname -s 00:25:37.267 07:05:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.267 07:05:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.267 07:05:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.267 07:05:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.525 07:05:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.525 07:05:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.525 07:05:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.525 07:05:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.525 07:05:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.525 07:05:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.525 07:05:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.525 07:05:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.525 07:05:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.525 07:05:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.525 07:05:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.525 07:05:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.525 07:05:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.525 07:05:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.525 07:05:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:25:37.525 07:05:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.525 07:05:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.525 07:05:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.525 07:05:44 -- paths/export.sh@5 -- # export PATH 00:25:37.525 07:05:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.525 07:05:44 -- nvmf/common.sh@46 -- # : 0 00:25:37.525 07:05:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:37.525 07:05:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:37.525 07:05:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:37.525 07:05:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.525 07:05:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.525 07:05:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:37.525 07:05:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:37.525 07:05:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:37.525 07:05:44 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:37.525 07:05:44 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:37.525 07:05:44 -- host/digest.sh@16 -- # runtime=2 00:25:37.525 07:05:44 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:25:37.525 07:05:44 -- host/digest.sh@132 -- # nvmftestinit 00:25:37.525 07:05:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:37.525 07:05:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.525 07:05:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:37.525 07:05:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:37.525 07:05:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:37.525 07:05:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.525 07:05:44 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:25:37.525 07:05:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.525 07:05:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:37.525 07:05:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:37.525 07:05:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:37.525 07:05:44 -- common/autotest_common.sh@10 -- # set +x 00:25:39.426 07:05:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:39.426 07:05:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:39.426 07:05:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:39.426 07:05:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:39.426 07:05:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:39.426 07:05:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:39.426 07:05:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:39.426 07:05:46 -- nvmf/common.sh@294 -- # net_devs=() 00:25:39.426 07:05:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:39.426 07:05:46 -- nvmf/common.sh@295 -- # e810=() 00:25:39.426 07:05:46 -- nvmf/common.sh@295 -- # local -ga e810 00:25:39.426 07:05:46 -- nvmf/common.sh@296 -- # x722=() 00:25:39.426 07:05:46 -- nvmf/common.sh@296 -- # local -ga x722 00:25:39.426 07:05:46 -- nvmf/common.sh@297 -- # mlx=() 00:25:39.426 07:05:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:39.426 07:05:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.426 07:05:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:39.426 07:05:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:39.426 07:05:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:39.426 07:05:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:39.426 07:05:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:39.426 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:39.426 07:05:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:39.426 07:05:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:39.426 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:39.426 07:05:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:25:39.426 07:05:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:39.426 07:05:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:39.426 07:05:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:39.426 07:05:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.426 07:05:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:39.426 07:05:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.426 07:05:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:39.426 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:39.426 07:05:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.426 07:05:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:39.426 07:05:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.426 07:05:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:39.426 07:05:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.426 07:05:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:39.426 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:39.426 07:05:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.426 07:05:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:39.427 07:05:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:39.427 07:05:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:39.427 07:05:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:39.427 07:05:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:39.427 07:05:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.427 07:05:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.427 07:05:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.427 07:05:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:39.427 07:05:46 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.427 07:05:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.427 07:05:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:39.427 07:05:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.427 07:05:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.427 07:05:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:39.427 07:05:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:39.427 07:05:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.427 07:05:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.427 07:05:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.427 07:05:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.427 07:05:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:39.427 07:05:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.685 07:05:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.685 07:05:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.685 07:05:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:39.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:25:39.685 00:25:39.685 --- 10.0.0.2 ping statistics --- 00:25:39.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.685 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:25:39.685 07:05:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:25:39.685 00:25:39.685 --- 10.0.0.1 ping statistics --- 00:25:39.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.685 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:25:39.685 07:05:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.685 07:05:46 -- nvmf/common.sh@410 -- # return 0 00:25:39.685 07:05:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:39.685 07:05:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.685 07:05:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:39.685 07:05:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:39.685 07:05:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.685 07:05:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:39.685 07:05:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:39.685 07:05:46 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:39.685 07:05:46 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:25:39.685 07:05:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:39.685 07:05:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.685 07:05:46 -- common/autotest_common.sh@10 -- # set +x 00:25:39.685 ************************************ 00:25:39.685 START TEST nvmf_digest_clean 00:25:39.685 ************************************ 00:25:39.685 07:05:46 -- common/autotest_common.sh@1104 -- # run_digest 00:25:39.685 07:05:46 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:25:39.685 07:05:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:39.685 07:05:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:39.685 07:05:46 -- common/autotest_common.sh@10 -- # set +x 00:25:39.685 07:05:46 -- nvmf/common.sh@469 -- # nvmfpid=3136115 00:25:39.685 07:05:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:39.685 07:05:46 -- nvmf/common.sh@470 -- # waitforlisten 3136115 00:25:39.685 07:05:46 -- common/autotest_common.sh@819 -- # '[' -z 3136115 ']' 00:25:39.685 07:05:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.685 07:05:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:39.685 07:05:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.685 07:05:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:39.685 07:05:46 -- common/autotest_common.sh@10 -- # set +x 00:25:39.685 [2024-05-12 07:05:46.680802] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:39.685 [2024-05-12 07:05:46.680889] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.685 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.685 [2024-05-12 07:05:46.750686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.943 [2024-05-12 07:05:46.863684] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:39.943 [2024-05-12 07:05:46.863863] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.943 [2024-05-12 07:05:46.863884] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.943 [2024-05-12 07:05:46.863898] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:39.943 [2024-05-12 07:05:46.863935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.514 07:05:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:40.514 07:05:47 -- common/autotest_common.sh@852 -- # return 0 00:25:40.514 07:05:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:40.514 07:05:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:40.514 07:05:47 -- common/autotest_common.sh@10 -- # set +x 00:25:40.819 07:05:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.819 07:05:47 -- host/digest.sh@120 -- # common_target_config 00:25:40.819 07:05:47 -- host/digest.sh@43 -- # rpc_cmd 00:25:40.819 07:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.819 07:05:47 -- common/autotest_common.sh@10 -- # set +x 00:25:40.819 null0 00:25:40.819 [2024-05-12 07:05:47.779636] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.819 [2024-05-12 07:05:47.803855] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.819 07:05:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.819 07:05:47 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:25:40.819 07:05:47 -- host/digest.sh@77 -- # local rw bs qd 00:25:40.819 07:05:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:40.819 07:05:47 -- host/digest.sh@80 -- # rw=randread 00:25:40.819 07:05:47 -- host/digest.sh@80 -- # bs=4096 00:25:40.819 07:05:47 -- host/digest.sh@80 -- # qd=128 00:25:40.819 07:05:47 -- host/digest.sh@82 -- # bperfpid=3136271 00:25:40.819 07:05:47 -- host/digest.sh@83 -- # waitforlisten 3136271 /var/tmp/bperf.sock 00:25:40.819 07:05:47 -- common/autotest_common.sh@819 -- # '[' -z 3136271 ']' 00:25:40.819 07:05:47 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 
-t 2 -q 128 -z --wait-for-rpc 00:25:40.819 07:05:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:40.819 07:05:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:40.819 07:05:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:40.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:40.819 07:05:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:40.819 07:05:47 -- common/autotest_common.sh@10 -- # set +x 00:25:40.819 [2024-05-12 07:05:47.848537] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:40.819 [2024-05-12 07:05:47.848612] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3136271 ] 00:25:40.819 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.819 [2024-05-12 07:05:47.909793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.078 [2024-05-12 07:05:48.026049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.078 07:05:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:41.078 07:05:48 -- common/autotest_common.sh@852 -- # return 0 00:25:41.078 07:05:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:25:41.078 07:05:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:25:41.078 07:05:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:41.336 07:05:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.336 07:05:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.905 nvme0n1 00:25:41.905 07:05:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:25:41.905 07:05:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:41.905 Running I/O for 2 seconds... 00:25:44.436 00:25:44.436 Latency(us) 00:25:44.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.436 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:44.436 nvme0n1 : 2.04 19270.97 75.28 0.00 0.00 6528.19 2548.62 44467.39 00:25:44.436 =================================================================================================================== 00:25:44.436 Total : 19270.97 75.28 0.00 0.00 6528.19 2548.62 44467.39 00:25:44.436 0 00:25:44.436 07:05:51 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:25:44.436 07:05:51 -- host/digest.sh@92 -- # get_accel_stats 00:25:44.436 07:05:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:44.436 07:05:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:44.436 | select(.opcode=="crc32c") 00:25:44.436 | "\(.module_name) \(.executed)"' 00:25:44.436 07:05:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:44.436 07:05:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:25:44.436 07:05:51 -- host/digest.sh@93 -- # exp_module=software 00:25:44.436 07:05:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:25:44.436 07:05:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:44.436 07:05:51 -- host/digest.sh@97 -- # killprocess 3136271 00:25:44.436 07:05:51 -- common/autotest_common.sh@926 -- # '[' -z 3136271 ']' 00:25:44.436 07:05:51 -- common/autotest_common.sh@930 -- # kill -0 3136271 00:25:44.436 07:05:51 -- 
common/autotest_common.sh@931 -- # uname 00:25:44.436 07:05:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:44.436 07:05:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3136271 00:25:44.436 07:05:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:44.436 07:05:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:44.436 07:05:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3136271' 00:25:44.436 killing process with pid 3136271 00:25:44.436 07:05:51 -- common/autotest_common.sh@945 -- # kill 3136271 00:25:44.436 Received shutdown signal, test time was about 2.000000 seconds 00:25:44.436 00:25:44.436 Latency(us) 00:25:44.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.436 =================================================================================================================== 00:25:44.436 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.436 07:05:51 -- common/autotest_common.sh@950 -- # wait 3136271 00:25:44.436 07:05:51 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:25:44.436 07:05:51 -- host/digest.sh@77 -- # local rw bs qd 00:25:44.436 07:05:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:44.436 07:05:51 -- host/digest.sh@80 -- # rw=randread 00:25:44.436 07:05:51 -- host/digest.sh@80 -- # bs=131072 00:25:44.436 07:05:51 -- host/digest.sh@80 -- # qd=16 00:25:44.436 07:05:51 -- host/digest.sh@82 -- # bperfpid=3136702 00:25:44.436 07:05:51 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:44.436 07:05:51 -- host/digest.sh@83 -- # waitforlisten 3136702 /var/tmp/bperf.sock 00:25:44.436 07:05:51 -- common/autotest_common.sh@819 -- # '[' -z 3136702 ']' 00:25:44.436 07:05:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:25:44.436 07:05:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:44.436 07:05:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:44.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:44.436 07:05:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:44.436 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:25:44.693 [2024-05-12 07:05:51.595772] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:44.694 [2024-05-12 07:05:51.595844] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3136702 ] 00:25:44.694 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:44.694 Zero copy mechanism will not be used. 00:25:44.694 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.694 [2024-05-12 07:05:51.660422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.694 [2024-05-12 07:05:51.776714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.952 07:05:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:44.952 07:05:51 -- common/autotest_common.sh@852 -- # return 0 00:25:44.952 07:05:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:25:44.952 07:05:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:25:44.952 07:05:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:45.210 07:05:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.210 07:05:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.467 nvme0n1 00:25:45.467 07:05:52 -- host/digest.sh@91 -- # bperf_py perform_tests 00:25:45.467 07:05:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:45.726 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:45.726 Zero copy mechanism will not be used. 00:25:45.726 Running I/O for 2 seconds... 00:25:47.626 00:25:47.626 Latency(us) 00:25:47.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.626 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:47.626 nvme0n1 : 2.00 2476.90 309.61 0.00 0.00 6454.88 6262.33 10048.85 00:25:47.626 =================================================================================================================== 00:25:47.626 Total : 2476.90 309.61 0.00 0.00 6454.88 6262.33 10048.85 00:25:47.626 0 00:25:47.626 07:05:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:25:47.626 07:05:54 -- host/digest.sh@92 -- # get_accel_stats 00:25:47.626 07:05:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:47.626 07:05:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:47.626 07:05:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:47.626 | select(.opcode=="crc32c") 00:25:47.626 | "\(.module_name) \(.executed)"' 00:25:47.884 07:05:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:25:47.884 07:05:54 -- host/digest.sh@93 -- # exp_module=software 00:25:47.884 07:05:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:25:47.884 07:05:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:47.884 07:05:54 -- host/digest.sh@97 -- # killprocess 3136702 00:25:47.884 07:05:54 -- 
common/autotest_common.sh@926 -- # '[' -z 3136702 ']' 00:25:47.884 07:05:54 -- common/autotest_common.sh@930 -- # kill -0 3136702 00:25:47.884 07:05:54 -- common/autotest_common.sh@931 -- # uname 00:25:47.884 07:05:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:47.884 07:05:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3136702 00:25:47.884 07:05:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:47.884 07:05:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:47.884 07:05:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3136702' 00:25:47.884 killing process with pid 3136702 00:25:47.884 07:05:54 -- common/autotest_common.sh@945 -- # kill 3136702 00:25:47.884 Received shutdown signal, test time was about 2.000000 seconds 00:25:47.884 00:25:47.884 Latency(us) 00:25:47.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.884 =================================================================================================================== 00:25:47.884 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:47.884 07:05:54 -- common/autotest_common.sh@950 -- # wait 3136702 00:25:48.142 07:05:55 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:25:48.142 07:05:55 -- host/digest.sh@77 -- # local rw bs qd 00:25:48.142 07:05:55 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:48.142 07:05:55 -- host/digest.sh@80 -- # rw=randwrite 00:25:48.142 07:05:55 -- host/digest.sh@80 -- # bs=4096 00:25:48.142 07:05:55 -- host/digest.sh@80 -- # qd=128 00:25:48.142 07:05:55 -- host/digest.sh@82 -- # bperfpid=3137233 00:25:48.142 07:05:55 -- host/digest.sh@83 -- # waitforlisten 3137233 /var/tmp/bperf.sock 00:25:48.142 07:05:55 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:48.142 07:05:55 -- 
common/autotest_common.sh@819 -- # '[' -z 3137233 ']' 00:25:48.142 07:05:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:48.142 07:05:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:48.142 07:05:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:48.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:48.142 07:05:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:48.142 07:05:55 -- common/autotest_common.sh@10 -- # set +x 00:25:48.142 [2024-05-12 07:05:55.219966] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:48.142 [2024-05-12 07:05:55.220062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137233 ] 00:25:48.142 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.399 [2024-05-12 07:05:55.282164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.399 [2024-05-12 07:05:55.398173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.399 07:05:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:48.399 07:05:55 -- common/autotest_common.sh@852 -- # return 0 00:25:48.399 07:05:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:25:48.399 07:05:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:25:48.399 07:05:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:48.657 07:05:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.657 07:05:55 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.224 nvme0n1 00:25:49.224 07:05:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:25:49.224 07:05:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:49.224 Running I/O for 2 seconds... 00:25:51.755 00:25:51.755 Latency(us) 00:25:51.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.755 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:51.755 nvme0n1 : 2.01 20333.91 79.43 0.00 0.00 6282.15 2900.57 10388.67 00:25:51.755 =================================================================================================================== 00:25:51.755 Total : 20333.91 79.43 0.00 0.00 6282.15 2900.57 10388.67 00:25:51.755 0 00:25:51.755 07:05:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:25:51.755 07:05:58 -- host/digest.sh@92 -- # get_accel_stats 00:25:51.755 07:05:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:51.755 07:05:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:51.755 | select(.opcode=="crc32c") 00:25:51.755 | "\(.module_name) \(.executed)"' 00:25:51.755 07:05:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:51.755 07:05:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:25:51.755 07:05:58 -- host/digest.sh@93 -- # exp_module=software 00:25:51.755 07:05:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:25:51.755 07:05:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:51.755 07:05:58 -- host/digest.sh@97 -- # killprocess 3137233 00:25:51.755 07:05:58 -- common/autotest_common.sh@926 -- # '[' -z 3137233 ']' 00:25:51.755 07:05:58 -- 
common/autotest_common.sh@930 -- # kill -0 3137233 00:25:51.755 07:05:58 -- common/autotest_common.sh@931 -- # uname 00:25:51.755 07:05:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:51.755 07:05:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3137233 00:25:51.755 07:05:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:51.755 07:05:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:51.755 07:05:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3137233' 00:25:51.755 killing process with pid 3137233 00:25:51.755 07:05:58 -- common/autotest_common.sh@945 -- # kill 3137233 00:25:51.755 Received shutdown signal, test time was about 2.000000 seconds 00:25:51.755 00:25:51.755 Latency(us) 00:25:51.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.755 =================================================================================================================== 00:25:51.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:51.755 07:05:58 -- common/autotest_common.sh@950 -- # wait 3137233 00:25:51.755 07:05:58 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:25:51.755 07:05:58 -- host/digest.sh@77 -- # local rw bs qd 00:25:51.755 07:05:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:51.755 07:05:58 -- host/digest.sh@80 -- # rw=randwrite 00:25:51.755 07:05:58 -- host/digest.sh@80 -- # bs=131072 00:25:51.755 07:05:58 -- host/digest.sh@80 -- # qd=16 00:25:51.755 07:05:58 -- host/digest.sh@82 -- # bperfpid=3137655 00:25:51.755 07:05:58 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:51.755 07:05:58 -- host/digest.sh@83 -- # waitforlisten 3137655 /var/tmp/bperf.sock 00:25:51.755 07:05:58 -- common/autotest_common.sh@819 -- # '[' -z 3137655 ']' 00:25:51.755 07:05:58 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:51.755 07:05:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:51.755 07:05:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:51.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:51.755 07:05:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:51.755 07:05:58 -- common/autotest_common.sh@10 -- # set +x 00:25:51.755 [2024-05-12 07:05:58.881583] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:51.755 [2024-05-12 07:05:58.881677] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137655 ] 00:25:51.755 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:51.755 Zero copy mechanism will not be used. 
00:25:52.014 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.014 [2024-05-12 07:05:58.940088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.014 [2024-05-12 07:05:59.047883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.014 07:05:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:52.014 07:05:59 -- common/autotest_common.sh@852 -- # return 0 00:25:52.014 07:05:59 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:25:52.014 07:05:59 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:25:52.014 07:05:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:52.582 07:05:59 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.582 07:05:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.841 nvme0n1 00:25:52.841 07:05:59 -- host/digest.sh@91 -- # bperf_py perform_tests 00:25:52.841 07:05:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.841 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.841 Zero copy mechanism will not be used. 00:25:52.841 Running I/O for 2 seconds... 
00:25:55.403 00:25:55.403 Latency(us) 00:25:55.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.403 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:55.403 nvme0n1 : 2.01 1722.25 215.28 0.00 0.00 9266.31 6189.51 14369.37 00:25:55.403 =================================================================================================================== 00:25:55.403 Total : 1722.25 215.28 0.00 0.00 9266.31 6189.51 14369.37 00:25:55.403 0 00:25:55.403 07:06:01 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:25:55.403 07:06:01 -- host/digest.sh@92 -- # get_accel_stats 00:25:55.403 07:06:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:55.403 07:06:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:55.403 07:06:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:55.403 | select(.opcode=="crc32c") 00:25:55.403 | "\(.module_name) \(.executed)"' 00:25:55.403 07:06:02 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:25:55.403 07:06:02 -- host/digest.sh@93 -- # exp_module=software 00:25:55.403 07:06:02 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:25:55.403 07:06:02 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:55.403 07:06:02 -- host/digest.sh@97 -- # killprocess 3137655 00:25:55.403 07:06:02 -- common/autotest_common.sh@926 -- # '[' -z 3137655 ']' 00:25:55.403 07:06:02 -- common/autotest_common.sh@930 -- # kill -0 3137655 00:25:55.403 07:06:02 -- common/autotest_common.sh@931 -- # uname 00:25:55.403 07:06:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:55.403 07:06:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3137655 00:25:55.403 07:06:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:55.403 07:06:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:55.403 07:06:02 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 3137655' 00:25:55.403 killing process with pid 3137655 00:25:55.403 07:06:02 -- common/autotest_common.sh@945 -- # kill 3137655 00:25:55.403 Received shutdown signal, test time was about 2.000000 seconds 00:25:55.403 00:25:55.403 Latency(us) 00:25:55.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.403 =================================================================================================================== 00:25:55.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:55.403 07:06:02 -- common/autotest_common.sh@950 -- # wait 3137655 00:25:55.403 07:06:02 -- host/digest.sh@126 -- # killprocess 3136115 00:25:55.403 07:06:02 -- common/autotest_common.sh@926 -- # '[' -z 3136115 ']' 00:25:55.403 07:06:02 -- common/autotest_common.sh@930 -- # kill -0 3136115 00:25:55.403 07:06:02 -- common/autotest_common.sh@931 -- # uname 00:25:55.403 07:06:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:55.403 07:06:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3136115 00:25:55.403 07:06:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:55.403 07:06:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:55.403 07:06:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3136115' 00:25:55.403 killing process with pid 3136115 00:25:55.403 07:06:02 -- common/autotest_common.sh@945 -- # kill 3136115 00:25:55.403 07:06:02 -- common/autotest_common.sh@950 -- # wait 3136115 00:25:55.661 00:25:55.661 real 0m16.082s 00:25:55.661 user 0m31.597s 00:25:55.661 sys 0m3.991s 00:25:55.661 07:06:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.661 07:06:02 -- common/autotest_common.sh@10 -- # set +x 00:25:55.661 ************************************ 00:25:55.661 END TEST nvmf_digest_clean 00:25:55.661 ************************************ 00:25:55.661 07:06:02 -- host/digest.sh@136 -- 
# run_test nvmf_digest_error run_digest_error 00:25:55.661 07:06:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:55.661 07:06:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:55.661 07:06:02 -- common/autotest_common.sh@10 -- # set +x 00:25:55.661 ************************************ 00:25:55.661 START TEST nvmf_digest_error 00:25:55.661 ************************************ 00:25:55.661 07:06:02 -- common/autotest_common.sh@1104 -- # run_digest_error 00:25:55.661 07:06:02 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:25:55.661 07:06:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:55.661 07:06:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:55.661 07:06:02 -- common/autotest_common.sh@10 -- # set +x 00:25:55.661 07:06:02 -- nvmf/common.sh@469 -- # nvmfpid=3138219 00:25:55.661 07:06:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:55.661 07:06:02 -- nvmf/common.sh@470 -- # waitforlisten 3138219 00:25:55.661 07:06:02 -- common/autotest_common.sh@819 -- # '[' -z 3138219 ']' 00:25:55.661 07:06:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.661 07:06:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:55.661 07:06:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.661 07:06:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:55.661 07:06:02 -- common/autotest_common.sh@10 -- # set +x 00:25:55.661 [2024-05-12 07:06:02.787897] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:55.661 [2024-05-12 07:06:02.787984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.920 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.920 [2024-05-12 07:06:02.852349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.920 [2024-05-12 07:06:02.961974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:55.920 [2024-05-12 07:06:02.962134] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.920 [2024-05-12 07:06:02.962151] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.920 [2024-05-12 07:06:02.962164] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.920 [2024-05-12 07:06:02.962190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.920 07:06:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:55.920 07:06:03 -- common/autotest_common.sh@852 -- # return 0 00:25:55.920 07:06:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:55.920 07:06:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:55.920 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:25:55.920 07:06:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.920 07:06:03 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:55.920 07:06:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.920 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:25:55.920 [2024-05-12 07:06:03.034770] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:55.920 07:06:03 -- common/autotest_common.sh@579 -- 
# [[ 0 == 0 ]] 00:25:55.920 07:06:03 -- host/digest.sh@104 -- # common_target_config 00:25:55.920 07:06:03 -- host/digest.sh@43 -- # rpc_cmd 00:25:55.920 07:06:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.920 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:25:56.178 null0 00:25:56.178 [2024-05-12 07:06:03.158058] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.178 [2024-05-12 07:06:03.182282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.178 07:06:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.178 07:06:03 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:25:56.178 07:06:03 -- host/digest.sh@54 -- # local rw bs qd 00:25:56.178 07:06:03 -- host/digest.sh@56 -- # rw=randread 00:25:56.178 07:06:03 -- host/digest.sh@56 -- # bs=4096 00:25:56.178 07:06:03 -- host/digest.sh@56 -- # qd=128 00:25:56.178 07:06:03 -- host/digest.sh@58 -- # bperfpid=3138358 00:25:56.178 07:06:03 -- host/digest.sh@60 -- # waitforlisten 3138358 /var/tmp/bperf.sock 00:25:56.178 07:06:03 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:56.178 07:06:03 -- common/autotest_common.sh@819 -- # '[' -z 3138358 ']' 00:25:56.178 07:06:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:56.178 07:06:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:56.178 07:06:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:56.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:56.178 07:06:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:56.178 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:25:56.178 [2024-05-12 07:06:03.226512] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:56.178 [2024-05-12 07:06:03.226583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138358 ] 00:25:56.178 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.178 [2024-05-12 07:06:03.292206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.437 [2024-05-12 07:06:03.406942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.371 07:06:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:57.371 07:06:04 -- common/autotest_common.sh@852 -- # return 0 00:25:57.371 07:06:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:57.371 07:06:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:57.371 07:06:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:57.371 07:06:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.371 07:06:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.371 07:06:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.371 07:06:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.371 07:06:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:25:57.938 nvme0n1 00:25:57.938 07:06:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:57.938 07:06:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.938 07:06:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.938 07:06:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.938 07:06:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:57.938 07:06:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:57.938 Running I/O for 2 seconds... 00:25:57.938 [2024-05-12 07:06:04.962450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:57.938 [2024-05-12 07:06:04.962500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-05-12 07:06:04.962522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.938 [2024-05-12 07:06:04.980871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:57.938 [2024-05-12 07:06:04.980918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-05-12 07:06:04.980935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.938 [2024-05-12 07:06:04.999824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:57.938 [2024-05-12 07:06:04.999863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 
[2024-05-12 07:06:04.999882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.938 [2024-05-12 07:06:05.016914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:57.938 [2024-05-12 07:06:05.016945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-05-12 07:06:05.016976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.938 [2024-05-12 07:06:05.031289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:57.938 [2024-05-12 07:06:05.031324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-05-12 07:06:05.031342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.938 [2024-05-12 07:06:05.043920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:57.938 [2024-05-12 07:06:05.043951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-05-12 07:06:05.043968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.938 [2024-05-12 07:06:05.058766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:57.938 [2024-05-12 07:06:05.058798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:2549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.938 [2024-05-12 07:06:05.058815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.074503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.074539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.074559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.086504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.086539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.086559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.103840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.103871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.103888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.119587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.119622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.119641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.135545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.135579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.135599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.150575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.150610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.150628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.161728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.161758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.161775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.175438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 
00:25:58.199 [2024-05-12 07:06:05.175469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.175486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.187679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.187717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.187735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.200114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.200145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.200161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.213396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.213430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.213449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.225630] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.225661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.225681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.238896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.238928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.238950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.251637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.251668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.251685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.264037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.264068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.264084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.276500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.276531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.276547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.289961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.289992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.290024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.302409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.302442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.302461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.199 [2024-05-12 07:06:05.314888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.199 [2024-05-12 07:06:05.314918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.199 [2024-05-12 07:06:05.314935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.328419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.328454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.328472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.341111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.341144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.341162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.353583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.353626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.353645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.365903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.365933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.365950] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.379298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.379331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.379349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.391990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.392039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.392058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.404440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.404473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.404491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.416782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.416811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13372 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.416828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.430179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.430213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.430232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.442914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.442945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.442961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.455310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.455344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.455363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.467739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.467776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:75 nsid:1 lba:8634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.467792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.481242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.481276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.481295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.493888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.493918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.493935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.506225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.506258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.506291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.518613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.518644] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.518661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.532003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.532036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.532054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.544764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.544795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.544811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.557147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.557177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.557194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.569486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.569516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.569555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.460 [2024-05-12 07:06:05.582963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.460 [2024-05-12 07:06:05.582994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.460 [2024-05-12 07:06:05.583027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.595669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.595715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.595751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.608012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.608042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.608059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.620262] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.620292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.620308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.633587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.633621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.633639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.646172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.646205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.646224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.658550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.658580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.658597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.671934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.671964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.671980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.684522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.684562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.684581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.697017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.697050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.697069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.709351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.709398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.709415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.722843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.722873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.722891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.735475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.735506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.735523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.747951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.747982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.747999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.760174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.760204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.760221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.773637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.773670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.773688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.786317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.786347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.786364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.720 [2024-05-12 07:06:05.798783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.720 [2024-05-12 07:06:05.798812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.720 [2024-05-12 07:06:05.798830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.721 [2024-05-12 07:06:05.811118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.721 [2024-05-12 07:06:05.811164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:58.721 [2024-05-12 07:06:05.811182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.721 [2024-05-12 07:06:05.824578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.721 [2024-05-12 07:06:05.824611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.721 [2024-05-12 07:06:05.824629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.721 [2024-05-12 07:06:05.837312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.721 [2024-05-12 07:06:05.837342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.721 [2024-05-12 07:06:05.837361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.979 [2024-05-12 07:06:05.849807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.979 [2024-05-12 07:06:05.849838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.979 [2024-05-12 07:06:05.849863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.979 [2024-05-12 07:06:05.862156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.979 [2024-05-12 07:06:05.862187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 
nsid:1 lba:14410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.979 [2024-05-12 07:06:05.862211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.979 [2024-05-12 07:06:05.875508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.979 [2024-05-12 07:06:05.875541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.979 [2024-05-12 07:06:05.875565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.979 [2024-05-12 07:06:05.888094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.979 [2024-05-12 07:06:05.888127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.979 [2024-05-12 07:06:05.888144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.979 [2024-05-12 07:06:05.900524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.979 [2024-05-12 07:06:05.900559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.979 [2024-05-12 07:06:05.900578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.979 [2024-05-12 07:06:05.912946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.979 [2024-05-12 07:06:05.912976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.979 [2024-05-12 07:06:05.912996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.979 [2024-05-12 07:06:05.926232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.979 [2024-05-12 07:06:05.926265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.979 [2024-05-12 07:06:05.926283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.979 [2024-05-12 07:06:05.938901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.979 [2024-05-12 07:06:05.938930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.979 [2024-05-12 07:06:05.938949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.979 [2024-05-12 07:06:05.951330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.979 [2024-05-12 07:06:05.951360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:05.951378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:05.963749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 
00:25:58.980 [2024-05-12 07:06:05.963778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:05.963797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:05.977093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:05.977125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:05.977144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:05.989467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:05.989501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:05.989521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:06.001772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:06.001802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:06.001818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:06.014158] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:06.014204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:06.014223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:06.027648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:06.027682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:06.027713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:06.040358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:06.040387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:06.040404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:06.052770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:06.052802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:06.052820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:06.066102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:06.066135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:06.066154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:06.078793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:06.078823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:06.078840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:06.091174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:06.091207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:06.091226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.980 [2024-05-12 07:06:06.103712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:58.980 [2024-05-12 07:06:06.103758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.980 [2024-05-12 07:06:06.103776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.117182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.117216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.117241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.129870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.129900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.129917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.142211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.142244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.142263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.154591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.154635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.154652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.168069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.168104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.168123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.180681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.180723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.180743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.193120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.193153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.193172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.205510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.205539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:59.239 [2024-05-12 07:06:06.205574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.218928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.218957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.218974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.231778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.231813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.231830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.244038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.244072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.244089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.256342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.256373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 
nsid:1 lba:22566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.256390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.269872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.269906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.269924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.282546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.282579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.282598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.295040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.295073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.295106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.307502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.307531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.307548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.320958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.321014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.321034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.333678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.333724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.333741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.346109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.239 [2024-05-12 07:06:06.346157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.346175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.239 [2024-05-12 07:06:06.358556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 
00:25:59.239 [2024-05-12 07:06:06.358586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.239 [2024-05-12 07:06:06.358602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.499 [2024-05-12 07:06:06.371970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.499 [2024-05-12 07:06:06.372021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.499 [2024-05-12 07:06:06.372041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.499 [2024-05-12 07:06:06.384579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.384609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.384626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.397001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.397032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.397048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.409402] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.409432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.409467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.422868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.422897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.422914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.435537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.435583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.435601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.447917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.447948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.447971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.461198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.461232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.461251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.473907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.473937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.473954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.486274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.486308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.486326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.498682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.498737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.498755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.512166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.512199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.512218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.524839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.524869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.524886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.537190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.537225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.537244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.549622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.549667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.549685] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.563054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.563094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.563114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.575651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.575684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.575712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.588094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.588129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.588148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.600543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.600574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18495 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.600590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.613830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.613860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.613877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.500 [2024-05-12 07:06:06.626521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.500 [2024-05-12 07:06:06.626554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.500 [2024-05-12 07:06:06.626573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.639041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.639090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.639107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.651394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.651423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:75 nsid:1 lba:8407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.651456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.664796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.664828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.664845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.677483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.677513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.677530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.689968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.689998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.690014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.702296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.702327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.702344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.715653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.715686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.715714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.728312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.728343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.728359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.740651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.740685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.740728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.752985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.753016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.753033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.766369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.766402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.766421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.779188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.779222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.779247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.761 [2024-05-12 07:06:06.791624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.761 [2024-05-12 07:06:06.791673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.761 [2024-05-12 07:06:06.791690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.762 [2024-05-12 07:06:06.803952] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.762 [2024-05-12 07:06:06.803983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.762 [2024-05-12 07:06:06.804000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.762 [2024-05-12 07:06:06.817350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.762 [2024-05-12 07:06:06.817384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.762 [2024-05-12 07:06:06.817403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.762 [2024-05-12 07:06:06.830066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.762 [2024-05-12 07:06:06.830100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.762 [2024-05-12 07:06:06.830118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.762 [2024-05-12 07:06:06.842432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.762 [2024-05-12 07:06:06.842480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.762 [2024-05-12 07:06:06.842497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:59.762 [2024-05-12 07:06:06.855718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.762 [2024-05-12 07:06:06.855766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.762 [2024-05-12 07:06:06.855783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.762 [2024-05-12 07:06:06.868365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.762 [2024-05-12 07:06:06.868398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.762 [2024-05-12 07:06:06.868417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.762 [2024-05-12 07:06:06.880781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:25:59.762 [2024-05-12 07:06:06.880811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.762 [2024-05-12 07:06:06.880828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.020 [2024-05-12 07:06:06.893110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:26:00.020 [2024-05-12 07:06:06.893158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.020 [2024-05-12 07:06:06.893175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.020 [2024-05-12 07:06:06.906483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:26:00.020 [2024-05-12 07:06:06.906517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.020 [2024-05-12 07:06:06.906536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.020 [2024-05-12 07:06:06.919008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:26:00.020 [2024-05-12 07:06:06.919042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.020 [2024-05-12 07:06:06.919061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.021 [2024-05-12 07:06:06.931323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:26:00.021 [2024-05-12 07:06:06.931354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.021 [2024-05-12 07:06:06.931371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.021 [2024-05-12 07:06:06.943710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x93df00) 00:26:00.021 [2024-05-12 07:06:06.943751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.021 [2024-05-12 07:06:06.943768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.021 00:26:00.021 Latency(us) 00:26:00.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.021 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:00.021 nvme0n1 : 2.00 19649.10 76.75 0.00 0.00 6507.00 2694.26 19515.16 00:26:00.021 =================================================================================================================== 00:26:00.021 Total : 19649.10 76.75 0.00 0.00 6507.00 2694.26 19515.16 00:26:00.021 0 00:26:00.021 07:06:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:00.021 07:06:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:00.021 07:06:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:00.021 | .driver_specific 00:26:00.021 | .nvme_error 00:26:00.021 | .status_code 00:26:00.021 | .command_transient_transport_error' 00:26:00.021 07:06:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:00.279 07:06:07 -- host/digest.sh@71 -- # (( 154 > 0 )) 00:26:00.279 07:06:07 -- host/digest.sh@73 -- # killprocess 3138358 00:26:00.279 07:06:07 -- common/autotest_common.sh@926 -- # '[' -z 3138358 ']' 00:26:00.279 07:06:07 -- common/autotest_common.sh@930 -- # kill -0 3138358 00:26:00.279 07:06:07 -- common/autotest_common.sh@931 -- # uname 00:26:00.279 07:06:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:00.279 07:06:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3138358 00:26:00.279 07:06:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:00.279 07:06:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:00.279 07:06:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3138358' 00:26:00.279 killing process with pid 
3138358 00:26:00.279 07:06:07 -- common/autotest_common.sh@945 -- # kill 3138358 00:26:00.279 Received shutdown signal, test time was about 2.000000 seconds 00:26:00.279 00:26:00.279 Latency(us) 00:26:00.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.279 =================================================================================================================== 00:26:00.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.279 07:06:07 -- common/autotest_common.sh@950 -- # wait 3138358 00:26:00.538 07:06:07 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:26:00.538 07:06:07 -- host/digest.sh@54 -- # local rw bs qd 00:26:00.538 07:06:07 -- host/digest.sh@56 -- # rw=randread 00:26:00.538 07:06:07 -- host/digest.sh@56 -- # bs=131072 00:26:00.538 07:06:07 -- host/digest.sh@56 -- # qd=16 00:26:00.538 07:06:07 -- host/digest.sh@58 -- # bperfpid=3138946 00:26:00.538 07:06:07 -- host/digest.sh@60 -- # waitforlisten 3138946 /var/tmp/bperf.sock 00:26:00.538 07:06:07 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:00.538 07:06:07 -- common/autotest_common.sh@819 -- # '[' -z 3138946 ']' 00:26:00.538 07:06:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:00.538 07:06:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:00.538 07:06:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:00.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:00.538 07:06:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:00.538 07:06:07 -- common/autotest_common.sh@10 -- # set +x 00:26:00.538 [2024-05-12 07:06:07.532017] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:00.538 [2024-05-12 07:06:07.532117] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138946 ] 00:26:00.538 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:00.538 Zero copy mechanism will not be used. 00:26:00.538 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.538 [2024-05-12 07:06:07.596060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.796 [2024-05-12 07:06:07.716006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.362 07:06:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:01.362 07:06:08 -- common/autotest_common.sh@852 -- # return 0 00:26:01.362 07:06:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:01.362 07:06:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:01.621 07:06:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:01.621 07:06:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.621 07:06:08 -- common/autotest_common.sh@10 -- # set +x 00:26:01.621 07:06:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.621 07:06:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.621 07:06:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.186 nvme0n1 00:26:02.186 07:06:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 
00:26:02.186 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:02.186 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:26:02.186 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:02.186 07:06:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:02.187 07:06:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:02.446 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.446 Zero copy mechanism will not be used. 00:26:02.446 Running I/O for 2 seconds... 00:26:02.446 [2024-05-12 07:06:09.355876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.355931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.446 [2024-05-12 07:06:09.355953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.368224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.368261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.446 [2024-05-12 07:06:09.368280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.380325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.380360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.446 [2024-05-12 07:06:09.380384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.392516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.392551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.446 [2024-05-12 07:06:09.392569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.404783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.404814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.446 [2024-05-12 07:06:09.404839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.417107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.417140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.446 [2024-05-12 07:06:09.417159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.429621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.429656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.446 [2024-05-12 07:06:09.429675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.441511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.441541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.446 [2024-05-12 07:06:09.441557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.453328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.453365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.446 [2024-05-12 07:06:09.453381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.465146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.465176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.446 [2024-05-12 07:06:09.465193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.446 [2024-05-12 07:06:09.477061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:02.446 [2024-05-12 07:06:09.477109] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.446 [2024-05-12 07:06:09.477127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.446 [2024-05-12 07:06:09.488941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.446 [2024-05-12 07:06:09.488997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.446 [2024-05-12 07:06:09.489014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.446 [2024-05-12 07:06:09.501038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.446 [2024-05-12 07:06:09.501086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.446 [2024-05-12 07:06:09.501104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.446 [2024-05-12 07:06:09.513098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.446 [2024-05-12 07:06:09.513132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.447 [2024-05-12 07:06:09.513151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.447 [2024-05-12 07:06:09.524797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.447 [2024-05-12 07:06:09.524827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.447 [2024-05-12 07:06:09.524843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.447 [2024-05-12 07:06:09.536690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.447 [2024-05-12 07:06:09.536732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.447 [2024-05-12 07:06:09.536766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.447 [2024-05-12 07:06:09.548579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.447 [2024-05-12 07:06:09.548612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.447 [2024-05-12 07:06:09.548630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.447 [2024-05-12 07:06:09.560496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.447 [2024-05-12 07:06:09.560530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.447 [2024-05-12 07:06:09.560549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.447 [2024-05-12 07:06:09.572560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.447 [2024-05-12 07:06:09.572591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.447 [2024-05-12 07:06:09.572608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.706 [2024-05-12 07:06:09.584379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.706 [2024-05-12 07:06:09.584411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.706 [2024-05-12 07:06:09.584427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.706 [2024-05-12 07:06:09.596081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.706 [2024-05-12 07:06:09.596116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.706 [2024-05-12 07:06:09.596135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.706 [2024-05-12 07:06:09.607852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.706 [2024-05-12 07:06:09.607883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.706 [2024-05-12 07:06:09.607899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.706 [2024-05-12 07:06:09.619893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.706 [2024-05-12 07:06:09.619923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.619940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.631820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.631851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.631868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.643781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.643813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.643830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.655788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.655820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.655845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.667675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.667719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.667739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.679647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.679681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.679710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.691515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.691548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.691566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.703330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.703364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.703382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.715812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.715856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.715871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.728320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.728353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.728372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.740639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.740675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.740694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.752503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.752537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.752555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.764325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.764359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.764378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.776028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.776072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.776091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.787919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.787950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.787967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.799895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.799924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.799940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.811637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.811665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.811704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.707 [2024-05-12 07:06:09.823410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.707 [2024-05-12 07:06:09.823444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.707 [2024-05-12 07:06:09.823463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.835236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.835272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.835291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.847523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.847557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.847577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.860123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.860157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.860184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.872849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.872880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.872897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.884945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.884991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.885010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.897081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.897114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.897132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.909558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.909592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.909611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.921785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.921814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.921830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.934201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.934235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.934254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.946533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.946566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.946584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.958902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.958933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.958949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.971127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.971169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.971189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.983442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.983475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.966 [2024-05-12 07:06:09.983493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.966 [2024-05-12 07:06:09.995922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.966 [2024-05-12 07:06:09.995953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.967 [2024-05-12 07:06:09.995970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.967 [2024-05-12 07:06:10.007952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.967 [2024-05-12 07:06:10.008001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.967 [2024-05-12 07:06:10.008021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.967 [2024-05-12 07:06:10.020086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.967 [2024-05-12 07:06:10.020120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.967 [2024-05-12 07:06:10.020139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.967 [2024-05-12 07:06:10.032208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.967 [2024-05-12 07:06:10.032249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.967 [2024-05-12 07:06:10.032276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.967 [2024-05-12 07:06:10.045611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.967 [2024-05-12 07:06:10.045655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.967 [2024-05-12 07:06:10.045680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.967 [2024-05-12 07:06:10.058171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.967 [2024-05-12 07:06:10.058206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.967 [2024-05-12 07:06:10.058225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.967 [2024-05-12 07:06:10.070484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.967 [2024-05-12 07:06:10.070518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.967 [2024-05-12 07:06:10.070537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:02.967 [2024-05-12 07:06:10.082738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:02.967 [2024-05-12 07:06:10.082785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.967 [2024-05-12 07:06:10.082803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:03.226 [2024-05-12 07:06:10.094993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.226 [2024-05-12 07:06:10.095025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.226 [2024-05-12 07:06:10.095042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:03.226 [2024-05-12 07:06:10.107428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.226 [2024-05-12 07:06:10.107468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.226 [2024-05-12 07:06:10.107487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:03.226 [2024-05-12 07:06:10.119827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.226 [2024-05-12 07:06:10.119857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.226 [2024-05-12 07:06:10.119874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:03.226 [2024-05-12 07:06:10.132340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.226 [2024-05-12 07:06:10.132375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.226 [2024-05-12 07:06:10.132394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:03.226 [2024-05-12 07:06:10.144465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.226 [2024-05-12 07:06:10.144498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.226 [2024-05-12 07:06:10.144517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:03.226 [2024-05-12 07:06:10.156319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.226 [2024-05-12 07:06:10.156353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.156371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.168445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.168478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.168496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.180721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.180767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.180792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.193118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.193151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.193170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.205001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.205031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.205064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.217403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.217437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.217456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.229669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.229712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.229733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.241895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.241925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.241942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.254291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.254324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.254343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.266780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.266811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.266827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.278843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.278873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.278890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.291081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.291116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.291135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.303625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.303659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.303678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.316060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.316093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.316112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.328212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.328245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.328263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.340421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.340455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.340474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:03.227 [2024-05-12 07:06:10.352800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.227 [2024-05-12 07:06:10.352829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.227 [2024-05-12 07:06:10.352845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:03.488 [2024-05-12 07:06:10.365255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.488 [2024-05-12 07:06:10.365290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.488 [2024-05-12 07:06:10.365309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:03.488 [2024-05-12 07:06:10.377675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.488 [2024-05-12 07:06:10.377715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.488 [2024-05-12 07:06:10.377736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:03.488 [2024-05-12 07:06:10.390124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.488 [2024-05-12 07:06:10.390158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.488 [2024-05-12 07:06:10.390183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:03.488 [2024-05-12 07:06:10.402397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.488 [2024-05-12 07:06:10.402430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.488 [2024-05-12 07:06:10.402449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:03.488 [2024-05-12 07:06:10.414411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.488 [2024-05-12 07:06:10.414444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.488 [2024-05-12 07:06:10.414463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:03.488 [2024-05-12 07:06:10.426747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.488 [2024-05-12 07:06:10.426791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.488 [2024-05-12 07:06:10.426807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:03.488 [2024-05-12 07:06:10.439369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.488 [2024-05-12 07:06:10.439402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.488 [2024-05-12 07:06:10.439420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:03.488 [2024-05-12 07:06:10.451795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.488 [2024-05-12 07:06:10.451823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.488 [2024-05-12 07:06:10.451840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:03.488 [2024-05-12 07:06:10.464312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880)
00:26:03.488 [2024-05-12 07:06:10.464346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.488 [2024-05-12 07:06:10.464365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.488 [2024-05-12 07:06:10.476598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.488 [2024-05-12 07:06:10.476626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.488 [2024-05-12 07:06:10.476657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.488 [2024-05-12 07:06:10.488630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.488 [2024-05-12 07:06:10.488663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.488 [2024-05-12 07:06:10.488682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.488 [2024-05-12 07:06:10.501109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.488 [2024-05-12 07:06:10.501149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.488 [2024-05-12 07:06:10.501169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.488 [2024-05-12 07:06:10.513287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a1e880) 00:26:03.488 [2024-05-12 07:06:10.513321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.488 [2024-05-12 07:06:10.513341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.488 [2024-05-12 07:06:10.525587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.488 [2024-05-12 07:06:10.525622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.488 [2024-05-12 07:06:10.525640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.488 [2024-05-12 07:06:10.537889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.488 [2024-05-12 07:06:10.537920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.488 [2024-05-12 07:06:10.537937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.488 [2024-05-12 07:06:10.550376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.488 [2024-05-12 07:06:10.550409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.488 [2024-05-12 07:06:10.550428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.488 [2024-05-12 07:06:10.562592] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.488 [2024-05-12 07:06:10.562625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.488 [2024-05-12 07:06:10.562644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.488 [2024-05-12 07:06:10.575128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.488 [2024-05-12 07:06:10.575161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.489 [2024-05-12 07:06:10.575179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.489 [2024-05-12 07:06:10.587774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.489 [2024-05-12 07:06:10.587804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.489 [2024-05-12 07:06:10.587821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.489 [2024-05-12 07:06:10.600064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.489 [2024-05-12 07:06:10.600098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.489 [2024-05-12 07:06:10.600116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:26:03.489 [2024-05-12 07:06:10.612528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.489 [2024-05-12 07:06:10.612560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.489 [2024-05-12 07:06:10.612579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.624660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.624690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.624713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.636771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.636800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.636817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.649520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.649555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.649574] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.661862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.661892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.661909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.674374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.674407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.674426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.686737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.686784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.686800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.699118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.699150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 
07:06:10.699169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.711512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.711547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.711572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.723713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.723747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.723779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.736043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.736077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.736096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.748509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.748542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.748562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.760759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.760788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.760804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.774079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.774114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.774133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.786342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.786371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.786391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.799178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.799211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.799229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.812024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.812078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.812093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.824836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.824866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.824883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.837775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.837806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.837823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.850545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.850578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.850598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.862561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.862594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.862613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.748 [2024-05-12 07:06:10.875383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:03.748 [2024-05-12 07:06:10.875417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.748 [2024-05-12 07:06:10.875435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.887795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.887826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.887844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.900583] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.900618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.900637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.912980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.913025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.913048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.925432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.925465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.925490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.937370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.937398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.937414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.949527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.949560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.949578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.961824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.961853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.961869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.974294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.974328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.974347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.986787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.986831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.986846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:10.999081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:10.999115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:10.999133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.010938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.010966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.010982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.023224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.023259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.023277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.035589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.035628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 
07:06:11.035648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.047872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.047902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.047918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.060986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.061034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.061053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.073220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.073254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.073272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.085118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.085151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.085169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.097482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.097515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.097534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.109657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.109691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.109721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.121857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.121886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.121902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.009 [2024-05-12 07:06:11.134068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.009 [2024-05-12 07:06:11.134113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.009 [2024-05-12 07:06:11.134138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.268 [2024-05-12 07:06:11.146300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.268 [2024-05-12 07:06:11.146335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.146354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.158757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.158791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.158810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.171244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.171278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.171296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.183366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.183400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.183418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.195731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.195776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.195792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.207878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.207908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.207924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.219756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.219786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.219803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.231655] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.231684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.231706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.243573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.243613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.243633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.255581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.255610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.255625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.267481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.267524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.267540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.279468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.279500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.279518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.291913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.291943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.291958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.304307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.304342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.304361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.316675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.316717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.316736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.328661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.328690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.328715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.269 [2024-05-12 07:06:11.340650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a1e880) 00:26:04.269 [2024-05-12 07:06:11.340691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.269 [2024-05-12 07:06:11.340736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.269 00:26:04.269 Latency(us) 00:26:04.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.269 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:04.269 nvme0n1 : 2.00 2532.06 316.51 0.00 0.00 6315.12 5728.33 13398.47 00:26:04.269 =================================================================================================================== 00:26:04.269 Total : 2532.06 316.51 0.00 0.00 6315.12 5728.33 13398.47 00:26:04.269 0 00:26:04.269 07:06:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:04.269 07:06:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:04.269 07:06:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:04.269 07:06:11 -- host/digest.sh@28 -- # jq 
-r '.bdevs[0] 00:26:04.269 | .driver_specific 00:26:04.269 | .nvme_error 00:26:04.269 | .status_code 00:26:04.269 | .command_transient_transport_error' 00:26:04.528 07:06:11 -- host/digest.sh@71 -- # (( 163 > 0 )) 00:26:04.528 07:06:11 -- host/digest.sh@73 -- # killprocess 3138946 00:26:04.528 07:06:11 -- common/autotest_common.sh@926 -- # '[' -z 3138946 ']' 00:26:04.528 07:06:11 -- common/autotest_common.sh@930 -- # kill -0 3138946 00:26:04.528 07:06:11 -- common/autotest_common.sh@931 -- # uname 00:26:04.528 07:06:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:04.528 07:06:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3138946 00:26:04.528 07:06:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:04.528 07:06:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:04.528 07:06:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3138946' 00:26:04.528 killing process with pid 3138946 00:26:04.528 07:06:11 -- common/autotest_common.sh@945 -- # kill 3138946 00:26:04.528 Received shutdown signal, test time was about 2.000000 seconds 00:26:04.528 00:26:04.528 Latency(us) 00:26:04.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.528 =================================================================================================================== 00:26:04.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.528 07:06:11 -- common/autotest_common.sh@950 -- # wait 3138946 00:26:04.787 07:06:11 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:26:04.787 07:06:11 -- host/digest.sh@54 -- # local rw bs qd 00:26:04.787 07:06:11 -- host/digest.sh@56 -- # rw=randwrite 00:26:04.787 07:06:11 -- host/digest.sh@56 -- # bs=4096 00:26:04.787 07:06:11 -- host/digest.sh@56 -- # qd=128 00:26:04.787 07:06:11 -- host/digest.sh@58 -- # bperfpid=3139849 00:26:04.787 07:06:11 -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:04.787 07:06:11 -- host/digest.sh@60 -- # waitforlisten 3139849 /var/tmp/bperf.sock 00:26:04.787 07:06:11 -- common/autotest_common.sh@819 -- # '[' -z 3139849 ']' 00:26:04.787 07:06:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:04.787 07:06:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:04.787 07:06:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:04.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:04.787 07:06:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:04.787 07:06:11 -- common/autotest_common.sh@10 -- # set +x 00:26:04.787 [2024-05-12 07:06:11.913224] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:04.787 [2024-05-12 07:06:11.913320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139849 ] 00:26:05.047 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.047 [2024-05-12 07:06:11.983747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.047 [2024-05-12 07:06:12.101558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.984 07:06:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:05.984 07:06:12 -- common/autotest_common.sh@852 -- # return 0 00:26:05.984 07:06:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:05.984 07:06:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:06.244 07:06:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:06.244 07:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.244 07:06:13 -- common/autotest_common.sh@10 -- # set +x 00:26:06.244 07:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.244 07:06:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.244 07:06:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.504 nvme0n1 00:26:06.504 07:06:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:06.504 07:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.504 07:06:13 -- common/autotest_common.sh@10 -- # 
set +x 00:26:06.504 07:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.504 07:06:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:06.504 07:06:13 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:06.504 Running I/O for 2 seconds... 00:26:06.504 [2024-05-12 07:06:13.573375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.504 [2024-05-12 07:06:13.574675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.504 [2024-05-12 07:06:13.574745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:06.504 [2024-05-12 07:06:13.586098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.504 [2024-05-12 07:06:13.587413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.504 [2024-05-12 07:06:13.587449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:06.504 [2024-05-12 07:06:13.599004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.504 [2024-05-12 07:06:13.600331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.504 [2024-05-12 07:06:13.600367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:06.504 [2024-05-12 07:06:13.611648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.504 [2024-05-12 07:06:13.612963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.504 [2024-05-12 07:06:13.612994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:06.504 [2024-05-12 07:06:13.624293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.504 [2024-05-12 07:06:13.625612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.504 [2024-05-12 07:06:13.625646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.767 [2024-05-12 07:06:13.636894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.767 [2024-05-12 07:06:13.638242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.767 [2024-05-12 07:06:13.638278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.767 [2024-05-12 07:06:13.649463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.767 [2024-05-12 07:06:13.650836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.767 [2024-05-12 07:06:13.650865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:06.767 [2024-05-12 07:06:13.661949] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.767 [2024-05-12 07:06:13.663341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.767 [2024-05-12 07:06:13.663375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.674513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.768 [2024-05-12 07:06:13.675941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.675969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.686979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e7c50 00:26:06.768 [2024-05-12 07:06:13.688372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.688406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.699484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e3060 00:26:06.768 [2024-05-12 07:06:13.700887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.700917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 
m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.711927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e5a90 00:26:06.768 [2024-05-12 07:06:13.713341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.713375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.724395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190ec408 00:26:06.768 [2024-05-12 07:06:13.725836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.725865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.736847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e73e0 00:26:06.768 [2024-05-12 07:06:13.738273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.738314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.749325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e99d8 00:26:06.768 [2024-05-12 07:06:13.750812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.750841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.761820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190f0ff8 00:26:06.768 [2024-05-12 07:06:13.763284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.763318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.774335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190f31b8 00:26:06.768 [2024-05-12 07:06:13.775668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.775708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.786722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:06.768 [2024-05-12 07:06:13.787982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.788027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.799304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:06.768 [2024-05-12 07:06:13.800563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.800597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.811772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:06.768 [2024-05-12 07:06:13.813068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.813101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.824197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:06.768 [2024-05-12 07:06:13.825516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.825550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.836881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:06.768 [2024-05-12 07:06:13.838218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.838252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.849322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:06.768 [2024-05-12 07:06:13.850657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 
[2024-05-12 07:06:13.850708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.861791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:06.768 [2024-05-12 07:06:13.863190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.863224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.874311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:06.768 [2024-05-12 07:06:13.875749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.875778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:06.768 [2024-05-12 07:06:13.886784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:06.768 [2024-05-12 07:06:13.888287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.768 [2024-05-12 07:06:13.888321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:13.899227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e5ec8 00:26:07.029 [2024-05-12 07:06:13.900946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14531 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:13.900977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:13.911639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e88f8 00:26:07.029 [2024-05-12 07:06:13.913143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:13.913180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:13.924132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190ed0b0 00:26:07.029 [2024-05-12 07:06:13.925523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:13.925557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:13.936419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e73e0 00:26:07.029 [2024-05-12 07:06:13.938239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:13.938274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:13.948813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190efae0 00:26:07.029 [2024-05-12 07:06:13.950231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:13.950264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:13.961205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190e5a90 00:26:07.029 [2024-05-12 07:06:13.962607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:13.962641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:13.972797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190f6020 00:26:07.029 [2024-05-12 07:06:13.973019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:13.973052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:13.986480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.029 [2024-05-12 07:06:13.986957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:13.987004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:13.999905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.029 [2024-05-12 07:06:14.000276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:14.000308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:14.013484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.029 [2024-05-12 07:06:14.013850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:14.013879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:14.027157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.029 [2024-05-12 07:06:14.027519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:14.027552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:14.040669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.029 [2024-05-12 07:06:14.041106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:14.041138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:14.054112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 
00:26:07.029 [2024-05-12 07:06:14.054478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.029 [2024-05-12 07:06:14.054510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.029 [2024-05-12 07:06:14.067482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.030 [2024-05-12 07:06:14.067845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.030 [2024-05-12 07:06:14.067874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.030 [2024-05-12 07:06:14.080923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.030 [2024-05-12 07:06:14.081293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.030 [2024-05-12 07:06:14.081326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.030 [2024-05-12 07:06:14.094354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.030 [2024-05-12 07:06:14.094719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.030 [2024-05-12 07:06:14.094762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.030 [2024-05-12 07:06:14.107798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.030 [2024-05-12 07:06:14.108121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.030 [2024-05-12 07:06:14.108155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.030 [2024-05-12 07:06:14.121160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.030 [2024-05-12 07:06:14.121483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.030 [2024-05-12 07:06:14.121515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.030 [2024-05-12 07:06:14.134631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.030 [2024-05-12 07:06:14.135022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.030 [2024-05-12 07:06:14.135054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.030 [2024-05-12 07:06:14.148094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.030 [2024-05-12 07:06:14.148457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.030 [2024-05-12 07:06:14.148488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.289 [2024-05-12 07:06:14.161655] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.289 [2024-05-12 07:06:14.161998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.289 [2024-05-12 07:06:14.162055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.289 [2024-05-12 07:06:14.175273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.175632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.175664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.188788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.189105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.189143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.202309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.202666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.202706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:26:07.290 [2024-05-12 07:06:14.215991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.216358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.216390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.229581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.229921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.229958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.243232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.243589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.243620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.256684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.257121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.257152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.270167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.270523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.270554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.283533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.283891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.283919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.296563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.296894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.296922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.309506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.309876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.309904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.322875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.323266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.323298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.336437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.336830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.336858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.349869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.350244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.350276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.363292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.363704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:07.290 [2024-05-12 07:06:14.363753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.376819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.377245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.377277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.390370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.390747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.390794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.403938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.404311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.404344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.290 [2024-05-12 07:06:14.417537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.290 [2024-05-12 07:06:14.417863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.290 [2024-05-12 07:06:14.417895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.551 [2024-05-12 07:06:14.431134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.551 [2024-05-12 07:06:14.431471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.551 [2024-05-12 07:06:14.431505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.551 [2024-05-12 07:06:14.444621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.551 [2024-05-12 07:06:14.445074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.551 [2024-05-12 07:06:14.445107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.551 [2024-05-12 07:06:14.458238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.551 [2024-05-12 07:06:14.458568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.551 [2024-05-12 07:06:14.458600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.551 [2024-05-12 07:06:14.471769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.551 [2024-05-12 07:06:14.472089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.551 [2024-05-12 07:06:14.472121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.551 [2024-05-12 07:06:14.485239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.551 [2024-05-12 07:06:14.485566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.551 [2024-05-12 07:06:14.485598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.551 [2024-05-12 07:06:14.498854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.551 [2024-05-12 07:06:14.499236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.551 [2024-05-12 07:06:14.499268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.551 [2024-05-12 07:06:14.512321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.551 [2024-05-12 07:06:14.512645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.551 [2024-05-12 07:06:14.512677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.551 [2024-05-12 07:06:14.525816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 
00:26:07.551 [2024-05-12 07:06:14.526141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.551 [2024-05-12 07:06:14.526173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.539288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.539616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.539655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.552867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.553198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.553230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.566355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.566725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.566768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.579883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.580278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.580308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.593432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.593805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.593836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.606824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.607189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.607221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.620242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.620598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.620630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.633864] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.634233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.634265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.647417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.647805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.647835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.660943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.661312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.661344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.552 [2024-05-12 07:06:14.674530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.552 [2024-05-12 07:06:14.674894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.552 [2024-05-12 07:06:14.674923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 
m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.688012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.688391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.688424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.701604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.701962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.701991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.715208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.715534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.715566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.728801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.729204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.729238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.742344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.742715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.742762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.755894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.756223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.756256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.769372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.769757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.769786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.782888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.783222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.783255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.796428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.796776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.796805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.809927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.810290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.810322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.823465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.823808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.823838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.837010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.837383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:07.813 [2024-05-12 07:06:14.837414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.850711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.851129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.851162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.864103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.864462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.864494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.877515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.877877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.877906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.813 [2024-05-12 07:06:14.891053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.813 [2024-05-12 07:06:14.891411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:10823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.813 [2024-05-12 07:06:14.891449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.814 [2024-05-12 07:06:14.904599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.814 [2024-05-12 07:06:14.904952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.814 [2024-05-12 07:06:14.904982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.814 [2024-05-12 07:06:14.918158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.814 [2024-05-12 07:06:14.918524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.814 [2024-05-12 07:06:14.918555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.814 [2024-05-12 07:06:14.931664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:07.814 [2024-05-12 07:06:14.932040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.814 [2024-05-12 07:06:14.932084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.072 [2024-05-12 07:06:14.945244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.072 [2024-05-12 07:06:14.945607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.072 [2024-05-12 07:06:14.945640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.072 [2024-05-12 07:06:14.958774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.072 [2024-05-12 07:06:14.959172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.072 [2024-05-12 07:06:14.959205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.072 [2024-05-12 07:06:14.972216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:14.972541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:14.972573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:14.985668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:14.986117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:14.986149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:14.999236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 
00:26:08.073 [2024-05-12 07:06:14.999595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:14.999626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.012754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.013114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.013146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.026323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.026682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.026721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.039884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.040215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.040247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.053604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.054055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.054088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.067152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.067481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.067514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.080677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.081117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.081149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.094274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.094630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.094661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.107864] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.108224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.108255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.121189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.121515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.121548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.134627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.135009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.135038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.148180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.148535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.148566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 
m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.161635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.162013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.162041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.175160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.175521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.175554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.073 [2024-05-12 07:06:15.188598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.073 [2024-05-12 07:06:15.188962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.073 [2024-05-12 07:06:15.189008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.202038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.202380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.202412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.215549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.215927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.215955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.229118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.229474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.229506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.242601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.242946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.242995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.256123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.256486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.256517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.269628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.270051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.270082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.283180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.283507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.283538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.296570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.296937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.296966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.310052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.310381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:08.332 [2024-05-12 07:06:15.310412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.323481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.323822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.323851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.336934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.337303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.337334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.350314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.350637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.350668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.363069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.363364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:6616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.363392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.375885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.376214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.376256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.389246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.389572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.389599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.402338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.402662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.402709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.415497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.415837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.415865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.428893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.429257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.429288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.442297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.442663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.442707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.332 [2024-05-12 07:06:15.455676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.332 [2024-05-12 07:06:15.456069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-05-12 07:06:15.456100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.592 [2024-05-12 07:06:15.469286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 
00:26:08.592 [2024-05-12 07:06:15.469648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.592 [2024-05-12 07:06:15.469690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.592 [2024-05-12 07:06:15.482824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.592 [2024-05-12 07:06:15.483158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.592 [2024-05-12 07:06:15.483189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.592 [2024-05-12 07:06:15.496386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.592 [2024-05-12 07:06:15.496754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.592 [2024-05-12 07:06:15.496807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.592 [2024-05-12 07:06:15.509901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.592 [2024-05-12 07:06:15.510271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.592 [2024-05-12 07:06:15.510302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.592 [2024-05-12 07:06:15.523399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.592 [2024-05-12 07:06:15.523766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.592 [2024-05-12 07:06:15.523813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.592 [2024-05-12 07:06:15.536994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.592 [2024-05-12 07:06:15.537352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.592 [2024-05-12 07:06:15.537382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.592 [2024-05-12 07:06:15.550485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.592 [2024-05-12 07:06:15.550853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.592 [2024-05-12 07:06:15.550881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.592 [2024-05-12 07:06:15.564023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1520240) with pdu=0x2000190eb760 00:26:08.592 [2024-05-12 07:06:15.564356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.592 [2024-05-12 07:06:15.564388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.592 00:26:08.592 Latency(us) 00:26:08.592 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.592 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:08.592 nvme0n1 : 2.01 19219.16 75.07 0.00 0.00 6645.44 3325.35 13786.83 00:26:08.592 =================================================================================================================== 00:26:08.592 Total : 19219.16 75.07 0.00 0.00 6645.44 3325.35 13786.83 00:26:08.592 0 00:26:08.592 07:06:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:08.592 07:06:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:08.592 07:06:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:08.592 07:06:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:08.592 | .driver_specific 00:26:08.592 | .nvme_error 00:26:08.592 | .status_code 00:26:08.592 | .command_transient_transport_error' 00:26:08.851 07:06:15 -- host/digest.sh@71 -- # (( 151 > 0 )) 00:26:08.851 07:06:15 -- host/digest.sh@73 -- # killprocess 3139849 00:26:08.851 07:06:15 -- common/autotest_common.sh@926 -- # '[' -z 3139849 ']' 00:26:08.851 07:06:15 -- common/autotest_common.sh@930 -- # kill -0 3139849 00:26:08.851 07:06:15 -- common/autotest_common.sh@931 -- # uname 00:26:08.851 07:06:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:08.851 07:06:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3139849 00:26:08.851 07:06:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:08.851 07:06:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:08.851 07:06:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3139849' 00:26:08.851 killing process with pid 3139849 00:26:08.851 07:06:15 -- common/autotest_common.sh@945 -- # kill 3139849 00:26:08.851 Received shutdown signal, test time was about 2.000000 seconds 00:26:08.851 00:26:08.851 
Latency(us) 00:26:08.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.851 =================================================================================================================== 00:26:08.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.851 07:06:15 -- common/autotest_common.sh@950 -- # wait 3139849 00:26:09.109 07:06:16 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:26:09.109 07:06:16 -- host/digest.sh@54 -- # local rw bs qd 00:26:09.109 07:06:16 -- host/digest.sh@56 -- # rw=randwrite 00:26:09.109 07:06:16 -- host/digest.sh@56 -- # bs=131072 00:26:09.109 07:06:16 -- host/digest.sh@56 -- # qd=16 00:26:09.109 07:06:16 -- host/digest.sh@58 -- # bperfpid=3140401 00:26:09.109 07:06:16 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:09.109 07:06:16 -- host/digest.sh@60 -- # waitforlisten 3140401 /var/tmp/bperf.sock 00:26:09.109 07:06:16 -- common/autotest_common.sh@819 -- # '[' -z 3140401 ']' 00:26:09.109 07:06:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.109 07:06:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:09.109 07:06:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.109 07:06:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:09.109 07:06:16 -- common/autotest_common.sh@10 -- # set +x 00:26:09.109 [2024-05-12 07:06:16.139527] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:09.109 [2024-05-12 07:06:16.139610] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140401 ] 00:26:09.109 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.109 Zero copy mechanism will not be used. 00:26:09.109 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.109 [2024-05-12 07:06:16.201316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.366 [2024-05-12 07:06:16.312350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.302 07:06:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:10.302 07:06:17 -- common/autotest_common.sh@852 -- # return 0 00:26:10.302 07:06:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:10.302 07:06:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:10.302 07:06:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:10.302 07:06:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.302 07:06:17 -- common/autotest_common.sh@10 -- # set +x 00:26:10.302 07:06:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.302 07:06:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.302 07:06:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.869 nvme0n1 00:26:10.869 07:06:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 
00:26:10.869 07:06:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:10.869 07:06:17 -- common/autotest_common.sh@10 -- # set +x
00:26:10.869 07:06:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:10.869 07:06:17 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:10.869 07:06:17 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:11.129 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:11.129 Zero copy mechanism will not be used.
00:26:11.129 Running I/O for 2 seconds...
00:26:11.129 [2024-05-12 07:06:18.038245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.038629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.038671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.057525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.057954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.057986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.076248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.076625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.076654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.094016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.094342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.094372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.112921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.113301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.113331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.131524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.131977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.132006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.150649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.151177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.151207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.168159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.168609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.168638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.185858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.186315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.186346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.203011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.203460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.203490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.222240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.222725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.222755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.129 [2024-05-12 07:06:18.241215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.129 [2024-05-12 07:06:18.241667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.129 [2024-05-12 07:06:18.241702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.260294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.260890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.260920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.278578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.279163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.279193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.297635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.298092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.298127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.316113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.316521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.316551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.333787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.334368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.334397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.351068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.351652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.351703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.370362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.370760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.370790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.389403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.389924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.389953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.407646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.408092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.408121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.426761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.427330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.427358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.444868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.445392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.445421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.464145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.464802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.464831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.482719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.483185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.483213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.390 [2024-05-12 07:06:18.500732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.390 [2024-05-12 07:06:18.501187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.390 [2024-05-12 07:06:18.501215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.649 [2024-05-12 07:06:18.520936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.649 [2024-05-12 07:06:18.521267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.649 [2024-05-12 07:06:18.521300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.539524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.539975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.540004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.559600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.560122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.560151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.579057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.579543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.579572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.597413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.597795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.597825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.614032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.614610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.614639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.633371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.633975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.634003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.654889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.655621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.655650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.674460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.674923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.674952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.694182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.694733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.694776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.713385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.713959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.713988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.730883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.731502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.731541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.750248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.750759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.750803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.650 [2024-05-12 07:06:18.770083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.650 [2024-05-12 07:06:18.770557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.650 [2024-05-12 07:06:18.770586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.908 [2024-05-12 07:06:18.790169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.908 [2024-05-12 07:06:18.790807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.908 [2024-05-12 07:06:18.790858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.908 [2024-05-12 07:06:18.809146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.908 [2024-05-12 07:06:18.809807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.908 [2024-05-12 07:06:18.809836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.908 [2024-05-12 07:06:18.825210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.908 [2024-05-12 07:06:18.825806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.908 [2024-05-12 07:06:18.825835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.908 [2024-05-12 07:06:18.844178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.908 [2024-05-12 07:06:18.844715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.908 [2024-05-12 07:06:18.844745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.908 [2024-05-12 07:06:18.862891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.908 [2024-05-12 07:06:18.863355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.908 [2024-05-12 07:06:18.863385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.908 [2024-05-12 07:06:18.881618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.908 [2024-05-12 07:06:18.882352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.908 [2024-05-12 07:06:18.882381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.908 [2024-05-12 07:06:18.901413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.908 [2024-05-12 07:06:18.901893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.908 [2024-05-12 07:06:18.901921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.909 [2024-05-12 07:06:18.920987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.909 [2024-05-12 07:06:18.921528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.909 [2024-05-12 07:06:18.921557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.909 [2024-05-12 07:06:18.940564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.909 [2024-05-12 07:06:18.941069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.909 [2024-05-12 07:06:18.941099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.909 [2024-05-12 07:06:18.960946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.909 [2024-05-12 07:06:18.961351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.909 [2024-05-12 07:06:18.961390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.909 [2024-05-12 07:06:18.979141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.909 [2024-05-12 07:06:18.979912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.909 [2024-05-12 07:06:18.979941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:11.909 [2024-05-12 07:06:18.999274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.909 [2024-05-12 07:06:18.999753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.909 [2024-05-12 07:06:18.999781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.909 [2024-05-12 07:06:19.018085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:11.909 [2024-05-12 07:06:19.018512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.909 [2024-05-12 07:06:19.018541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.165 [2024-05-12 07:06:19.037608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.165 [2024-05-12 07:06:19.037989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.165 [2024-05-12 07:06:19.038019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.165 [2024-05-12 07:06:19.055824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.165 [2024-05-12 07:06:19.056260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.165 [2024-05-12 07:06:19.056289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.165 [2024-05-12 07:06:19.073927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.165 [2024-05-12 07:06:19.074301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.165 [2024-05-12 07:06:19.074330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.092954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.093468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.093497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.112232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.112873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.112902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.130510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.130980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.131009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.149658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.150055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.150084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.168568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.169030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.169059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.187621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.188006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.188035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.206274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.206807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.206835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.225525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.226057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.226086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.244094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.244643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.244671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.263129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.263721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.263750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.166 [2024-05-12 07:06:19.281754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.166 [2024-05-12 07:06:19.282239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.166 [2024-05-12 07:06:19.282272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.423 [2024-05-12 07:06:19.301390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.423 [2024-05-12 07:06:19.301790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.423 [2024-05-12 07:06:19.301818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.423 [2024-05-12 07:06:19.320660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.423 [2024-05-12 07:06:19.321286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.423 [2024-05-12 07:06:19.321329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.423 [2024-05-12 07:06:19.339682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.423 [2024-05-12 07:06:19.340119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.423 [2024-05-12 07:06:19.340147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.423 [2024-05-12 07:06:19.358134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.424 [2024-05-12 07:06:19.358728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.424 [2024-05-12 07:06:19.358757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.424 [2024-05-12 07:06:19.377253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.424 [2024-05-12 07:06:19.377737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.424 [2024-05-12 07:06:19.377766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.424 [2024-05-12 07:06:19.396008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.424 [2024-05-12 07:06:19.396403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.424 [2024-05-12 07:06:19.396431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.424 [2024-05-12 07:06:19.415251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.424 [2024-05-12 07:06:19.415979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.424 [2024-05-12 07:06:19.416007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.424 [2024-05-12 07:06:19.435188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.424 [2024-05-12 07:06:19.435569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.424 [2024-05-12 07:06:19.435598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.424 [2024-05-12 07:06:19.453489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.424 [2024-05-12 07:06:19.454137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.424 [2024-05-12 07:06:19.454166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:12.424 [2024-05-12 07:06:19.473517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90
00:26:12.424 [2024-05-12 07:06:19.474035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.424 [2024-05-12 07:06:19.474063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.424 [2024-05-12 07:06:19.492669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.424 [2024-05-12 07:06:19.493200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.424 [2024-05-12 07:06:19.493228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.424 [2024-05-12 07:06:19.512299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.424 [2024-05-12 07:06:19.512768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.424 [2024-05-12 07:06:19.512796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.424 [2024-05-12 07:06:19.532068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.424 [2024-05-12 07:06:19.532523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.424 [2024-05-12 07:06:19.532552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.424 [2024-05-12 07:06:19.551038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.424 [2024-05-12 07:06:19.551492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.424 [2024-05-12 07:06:19.551521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.568947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.569611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.569640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.588292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.588690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.588725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.608273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.609110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.609148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.627937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 
00:26:12.682 [2024-05-12 07:06:19.628323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.628351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.646750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.647260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.647289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.665742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.666187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.666217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.685162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.685790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.685818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.704686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.705227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.705256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.724045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.724634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.724663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.744182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.744682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.744718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.763669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.764231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.764260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 
07:06:19.781444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.781906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.781935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.682 [2024-05-12 07:06:19.800291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.682 [2024-05-12 07:06:19.800723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.682 [2024-05-12 07:06:19.800752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.819331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.819713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:19.819742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.839208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.839873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:19.839902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.855462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.855841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:19.855869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.872028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.872489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:19.872518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.890535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.891190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:19.891219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.909516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.910127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:19.910156] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.927511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.928109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:19.928137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.946199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.946604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:19.946633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.964486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.965299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:19.965328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:19.983580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:19.983895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 
07:06:19.983924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:20.002458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:20.003202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:20.003232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.941 [2024-05-12 07:06:20.021147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15203e0) with pdu=0x2000190fef90 00:26:12.941 [2024-05-12 07:06:20.021365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.941 [2024-05-12 07:06:20.021399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.941 00:26:12.941 Latency(us) 00:26:12.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.941 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:12.941 nvme0n1 : 2.01 1643.01 205.38 0.00 0.00 9714.58 6893.42 21165.70 00:26:12.941 =================================================================================================================== 00:26:12.941 Total : 1643.01 205.38 0.00 0.00 9714.58 6893.42 21165.70 00:26:12.941 0 00:26:12.941 07:06:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:12.941 07:06:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:12.941 07:06:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:12.941 | .driver_specific 00:26:12.941 | .nvme_error 00:26:12.941 | 
.status_code 00:26:12.941 | .command_transient_transport_error' 00:26:12.941 07:06:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:13.200 07:06:20 -- host/digest.sh@71 -- # (( 106 > 0 )) 00:26:13.200 07:06:20 -- host/digest.sh@73 -- # killprocess 3140401 00:26:13.200 07:06:20 -- common/autotest_common.sh@926 -- # '[' -z 3140401 ']' 00:26:13.200 07:06:20 -- common/autotest_common.sh@930 -- # kill -0 3140401 00:26:13.200 07:06:20 -- common/autotest_common.sh@931 -- # uname 00:26:13.200 07:06:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:13.200 07:06:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3140401 00:26:13.462 07:06:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:13.462 07:06:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:13.462 07:06:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3140401' 00:26:13.462 killing process with pid 3140401 00:26:13.462 07:06:20 -- common/autotest_common.sh@945 -- # kill 3140401 00:26:13.462 Received shutdown signal, test time was about 2.000000 seconds 00:26:13.462 00:26:13.462 Latency(us) 00:26:13.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.462 =================================================================================================================== 00:26:13.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:13.462 07:06:20 -- common/autotest_common.sh@950 -- # wait 3140401 00:26:13.721 07:06:20 -- host/digest.sh@115 -- # killprocess 3138219 00:26:13.722 07:06:20 -- common/autotest_common.sh@926 -- # '[' -z 3138219 ']' 00:26:13.722 07:06:20 -- common/autotest_common.sh@930 -- # kill -0 3138219 00:26:13.722 07:06:20 -- common/autotest_common.sh@931 -- # uname 00:26:13.722 07:06:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:13.722 07:06:20 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3138219 00:26:13.722 07:06:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:13.722 07:06:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:13.722 07:06:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3138219' 00:26:13.722 killing process with pid 3138219 00:26:13.722 07:06:20 -- common/autotest_common.sh@945 -- # kill 3138219 00:26:13.722 07:06:20 -- common/autotest_common.sh@950 -- # wait 3138219 00:26:13.979 00:26:13.979 real 0m18.183s 00:26:13.979 user 0m37.092s 00:26:13.979 sys 0m4.105s 00:26:13.979 07:06:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.979 07:06:20 -- common/autotest_common.sh@10 -- # set +x 00:26:13.979 ************************************ 00:26:13.979 END TEST nvmf_digest_error 00:26:13.979 ************************************ 00:26:13.979 07:06:20 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:26:13.980 07:06:20 -- host/digest.sh@139 -- # nvmftestfini 00:26:13.980 07:06:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:13.980 07:06:20 -- nvmf/common.sh@116 -- # sync 00:26:13.980 07:06:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:13.980 07:06:20 -- nvmf/common.sh@119 -- # set +e 00:26:13.980 07:06:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:13.980 07:06:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:13.980 rmmod nvme_tcp 00:26:13.980 rmmod nvme_fabrics 00:26:13.980 rmmod nvme_keyring 00:26:13.980 07:06:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:13.980 07:06:20 -- nvmf/common.sh@123 -- # set -e 00:26:13.980 07:06:20 -- nvmf/common.sh@124 -- # return 0 00:26:13.980 07:06:20 -- nvmf/common.sh@477 -- # '[' -n 3138219 ']' 00:26:13.980 07:06:20 -- nvmf/common.sh@478 -- # killprocess 3138219 00:26:13.980 07:06:20 -- common/autotest_common.sh@926 -- # '[' -z 3138219 ']' 00:26:13.980 07:06:20 -- common/autotest_common.sh@930 -- # kill -0 3138219 
00:26:13.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3138219) - No such process 00:26:13.980 07:06:20 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3138219 is not found' 00:26:13.980 Process with pid 3138219 is not found 00:26:13.980 07:06:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:13.980 07:06:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:13.980 07:06:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:13.980 07:06:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.980 07:06:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:13.980 07:06:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.980 07:06:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.980 07:06:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.512 07:06:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:16.512 00:26:16.512 real 0m38.686s 00:26:16.512 user 1m9.500s 00:26:16.512 sys 0m9.706s 00:26:16.512 07:06:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.512 07:06:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.512 ************************************ 00:26:16.512 END TEST nvmf_digest 00:26:16.512 ************************************ 00:26:16.512 07:06:23 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:26:16.512 07:06:23 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:26:16.512 07:06:23 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:26:16.512 07:06:23 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:16.512 07:06:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:16.512 07:06:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:16.512 07:06:23 -- common/autotest_common.sh@10 -- # set +x 00:26:16.512 ************************************ 00:26:16.512 START TEST 
nvmf_bdevperf 00:26:16.512 ************************************ 00:26:16.512 07:06:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:16.512 * Looking for test storage... 00:26:16.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:16.512 07:06:23 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.512 07:06:23 -- nvmf/common.sh@7 -- # uname -s 00:26:16.512 07:06:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.512 07:06:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.512 07:06:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.512 07:06:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.512 07:06:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.512 07:06:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.512 07:06:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.512 07:06:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.512 07:06:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.512 07:06:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.512 07:06:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:16.512 07:06:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:16.512 07:06:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.512 07:06:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.512 07:06:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.512 07:06:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.512 07:06:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.512 07:06:23 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.512 07:06:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.512 07:06:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.512 07:06:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.512 07:06:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.512 07:06:23 -- paths/export.sh@5 -- # export PATH 00:26:16.512 07:06:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.512 07:06:23 -- nvmf/common.sh@46 -- # : 0 00:26:16.512 07:06:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:16.512 07:06:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:16.512 07:06:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:16.512 07:06:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.512 07:06:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.512 07:06:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:16.512 07:06:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:16.512 07:06:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:16.512 07:06:23 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:16.512 07:06:23 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:16.512 07:06:23 -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:16.512 07:06:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:16.512 07:06:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.512 07:06:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:16.512 07:06:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:16.512 07:06:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:16.512 07:06:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.512 07:06:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.512 07:06:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.512 07:06:23 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:16.512 07:06:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:16.512 07:06:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:16.512 07:06:23 -- common/autotest_common.sh@10 -- # set +x 00:26:17.887 07:06:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:17.887 07:06:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:17.887 07:06:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:17.887 07:06:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:17.887 07:06:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:17.887 07:06:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:17.887 07:06:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:17.887 07:06:24 -- nvmf/common.sh@294 -- # net_devs=() 00:26:17.887 07:06:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:17.887 07:06:24 -- nvmf/common.sh@295 -- # e810=() 00:26:17.887 07:06:24 -- nvmf/common.sh@295 -- # local -ga e810 00:26:17.887 07:06:24 -- nvmf/common.sh@296 -- # x722=() 00:26:17.887 07:06:24 -- nvmf/common.sh@296 -- # local -ga x722 00:26:17.887 07:06:24 -- nvmf/common.sh@297 -- # mlx=() 00:26:17.887 07:06:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:17.887 07:06:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.887 07:06:24 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.887 07:06:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:17.887 07:06:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:17.887 07:06:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:17.887 07:06:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:17.887 07:06:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:17.887 07:06:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:17.888 07:06:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:17.888 07:06:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:17.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:17.888 07:06:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:17.888 07:06:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:17.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:17.888 07:06:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:17.888 07:06:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:17.888 07:06:24 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:17.888 07:06:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.888 07:06:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:17.888 07:06:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.888 07:06:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:17.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:17.888 07:06:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.888 07:06:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:17.888 07:06:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.888 07:06:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:17.888 07:06:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.888 07:06:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:17.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:17.888 07:06:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.888 07:06:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:17.888 07:06:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:17.888 07:06:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:17.888 07:06:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:17.888 07:06:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.888 07:06:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.888 07:06:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.888 07:06:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:17.888 07:06:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.888 07:06:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.888 07:06:24 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:17.888 07:06:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.888 07:06:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.888 07:06:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:17.888 07:06:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:17.888 07:06:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.888 07:06:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.146 07:06:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.146 07:06:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.146 07:06:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:18.146 07:06:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.146 07:06:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.146 07:06:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.146 07:06:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:18.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:26:18.146 00:26:18.146 --- 10.0.0.2 ping statistics --- 00:26:18.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.146 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:26:18.146 07:06:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:26:18.146 00:26:18.146 --- 10.0.0.1 ping statistics --- 00:26:18.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.146 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:26:18.146 07:06:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.146 07:06:25 -- nvmf/common.sh@410 -- # return 0 00:26:18.146 07:06:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:18.146 07:06:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.146 07:06:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:18.146 07:06:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:18.146 07:06:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.146 07:06:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:18.146 07:06:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:18.146 07:06:25 -- host/bdevperf.sh@25 -- # tgt_init 00:26:18.146 07:06:25 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:18.146 07:06:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:18.146 07:06:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:18.146 07:06:25 -- common/autotest_common.sh@10 -- # set +x 00:26:18.146 07:06:25 -- nvmf/common.sh@469 -- # nvmfpid=3142910 00:26:18.146 07:06:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:18.146 07:06:25 -- nvmf/common.sh@470 -- # waitforlisten 3142910 00:26:18.146 07:06:25 -- common/autotest_common.sh@819 -- # '[' -z 3142910 ']' 00:26:18.146 07:06:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.146 07:06:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:18.146 07:06:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
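The nvmf_tcp_init sequence traced above (flush addresses, create a namespace, move the target NIC into it, assign 10.0.0.x addresses, open port 4420, verify with ping) can be sketched as a standalone helper. Interface names, addresses, and the iptables rule are taken directly from the log; the function assumes root privileges and that the cvl_0_0/cvl_0_1 devices exist, and is a simplified sketch of what nvmf/common.sh does, not the harness's exact implementation:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps from the log above. Assumes root and
# that the cvl_0_0 (target) and cvl_0_1 (initiator) interfaces exist.
nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"        # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if" # initiator IP stays on the host
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP traffic (port 4420) in on the initiator interface.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions, as the log does.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Splitting target and initiator across a network namespace boundary is what lets a single machine exercise a real TCP path between the two ports of one NIC.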
00:26:18.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.146 07:06:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:18.146 07:06:25 -- common/autotest_common.sh@10 -- # set +x 00:26:18.146 [2024-05-12 07:06:25.165829] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:18.146 [2024-05-12 07:06:25.165905] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.146 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.146 [2024-05-12 07:06:25.237457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:18.404 [2024-05-12 07:06:25.354240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:18.404 [2024-05-12 07:06:25.354403] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.404 [2024-05-12 07:06:25.354424] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.404 [2024-05-12 07:06:25.354439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
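The waitforlisten call above polls until the nvmf_tgt process is up and its RPC socket (/var/tmp/spdk.sock, max_retries=100 per the trace) exists. A simplified sketch of that wait loop, under the assumption that checking process liveness plus socket existence is sufficient (the real autotest_common.sh helper also probes the socket with an RPC):

```shell
#!/usr/bin/env bash
# Simplified sketch of waitforlisten: poll until the given pid has created
# its UNIX-domain RPC socket, or fail if the process dies / retries run out.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for i in $(seq 1 100); do
        if ! kill -0 "$pid" 2>/dev/null; then
            return 1            # process exited before listening
        fi
        if [ -S "$sock" ]; then
            return 0            # RPC socket is up
        fi
        sleep 0.1
    done
    return 1                    # timed out
}
```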
00:26:18.404 [2024-05-12 07:06:25.354535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.404 [2024-05-12 07:06:25.356714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.404 [2024-05-12 07:06:25.356726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.339 07:06:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:19.339 07:06:26 -- common/autotest_common.sh@852 -- # return 0 00:26:19.339 07:06:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:19.339 07:06:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:19.339 07:06:26 -- common/autotest_common.sh@10 -- # set +x 00:26:19.339 07:06:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.339 07:06:26 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:19.339 07:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.339 07:06:26 -- common/autotest_common.sh@10 -- # set +x 00:26:19.339 [2024-05-12 07:06:26.175387] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.339 07:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.339 07:06:26 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:19.339 07:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.339 07:06:26 -- common/autotest_common.sh@10 -- # set +x 00:26:19.339 Malloc0 00:26:19.339 07:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.339 07:06:26 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:19.339 07:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.339 07:06:26 -- common/autotest_common.sh@10 -- # set +x 00:26:19.339 07:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.339 07:06:26 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:19.339 07:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.339 07:06:26 -- common/autotest_common.sh@10 -- # set +x 00:26:19.339 07:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.339 07:06:26 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.339 07:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.339 07:06:26 -- common/autotest_common.sh@10 -- # set +x 00:26:19.339 [2024-05-12 07:06:26.232767] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.339 07:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.339 07:06:26 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:19.339 07:06:26 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:19.339 07:06:26 -- nvmf/common.sh@520 -- # config=() 00:26:19.339 07:06:26 -- nvmf/common.sh@520 -- # local subsystem config 00:26:19.339 07:06:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:19.339 07:06:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:19.339 { 00:26:19.339 "params": { 00:26:19.339 "name": "Nvme$subsystem", 00:26:19.339 "trtype": "$TEST_TRANSPORT", 00:26:19.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:19.339 "adrfam": "ipv4", 00:26:19.339 "trsvcid": "$NVMF_PORT", 00:26:19.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:19.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:19.340 "hdgst": ${hdgst:-false}, 00:26:19.340 "ddgst": ${ddgst:-false} 00:26:19.340 }, 00:26:19.340 "method": "bdev_nvme_attach_controller" 00:26:19.340 } 00:26:19.340 EOF 00:26:19.340 )") 00:26:19.340 07:06:26 -- nvmf/common.sh@542 -- # cat 00:26:19.340 07:06:26 -- nvmf/common.sh@544 -- # jq . 
00:26:19.340 07:06:26 -- nvmf/common.sh@545 -- # IFS=, 00:26:19.340 07:06:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:19.340 "params": { 00:26:19.340 "name": "Nvme1", 00:26:19.340 "trtype": "tcp", 00:26:19.340 "traddr": "10.0.0.2", 00:26:19.340 "adrfam": "ipv4", 00:26:19.340 "trsvcid": "4420", 00:26:19.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:19.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:19.340 "hdgst": false, 00:26:19.340 "ddgst": false 00:26:19.340 }, 00:26:19.340 "method": "bdev_nvme_attach_controller" 00:26:19.340 }' 00:26:19.340 [2024-05-12 07:06:26.276912] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:19.340 [2024-05-12 07:06:26.276978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143066 ] 00:26:19.340 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.340 [2024-05-12 07:06:26.336827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.340 [2024-05-12 07:06:26.447192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.599 Running I/O for 1 seconds... 
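Stripped of xtrace noise, the target-side configuration issued earlier via rpc_cmd (create the TCP transport, a 64 MiB Malloc bdev, the subsystem, its namespace, and the listener) amounts to the following RPC sequence. All commands and arguments are verbatim from the log; the rpc.py path is an assumption (rpc_cmd wraps SPDK's scripts/rpc.py):

```shell
#!/usr/bin/env bash
# Target-side RPC sequence from the log, assuming a running nvmf_tgt and
# SPDK's scripts/rpc.py on PATH as rpc.py.
configure_nvmf_target() {
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                  # allow-any-host, serial number
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```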
00:26:20.978 00:26:20.978 Latency(us) 00:26:20.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.978 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:20.978 Verification LBA range: start 0x0 length 0x4000 00:26:20.978 Nvme1n1 : 1.00 13107.16 51.20 0.00 0.00 9730.13 1086.20 16117.00 00:26:20.978 =================================================================================================================== 00:26:20.978 Total : 13107.16 51.20 0.00 0.00 9730.13 1086.20 16117.00 00:26:20.978 07:06:27 -- host/bdevperf.sh@30 -- # bdevperfpid=3143335 00:26:20.978 07:06:27 -- host/bdevperf.sh@32 -- # sleep 3 00:26:20.978 07:06:27 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:20.978 07:06:27 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:20.978 07:06:27 -- nvmf/common.sh@520 -- # config=() 00:26:20.978 07:06:27 -- nvmf/common.sh@520 -- # local subsystem config 00:26:20.978 07:06:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.978 07:06:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.978 { 00:26:20.978 "params": { 00:26:20.978 "name": "Nvme$subsystem", 00:26:20.978 "trtype": "$TEST_TRANSPORT", 00:26:20.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.978 "adrfam": "ipv4", 00:26:20.978 "trsvcid": "$NVMF_PORT", 00:26:20.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.978 "hdgst": ${hdgst:-false}, 00:26:20.978 "ddgst": ${ddgst:-false} 00:26:20.978 }, 00:26:20.978 "method": "bdev_nvme_attach_controller" 00:26:20.978 } 00:26:20.978 EOF 00:26:20.978 )") 00:26:20.978 07:06:27 -- nvmf/common.sh@542 -- # cat 00:26:20.978 07:06:27 -- nvmf/common.sh@544 -- # jq . 
00:26:20.978 07:06:27 -- nvmf/common.sh@545 -- # IFS=, 00:26:20.978 07:06:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:20.978 "params": { 00:26:20.978 "name": "Nvme1", 00:26:20.978 "trtype": "tcp", 00:26:20.978 "traddr": "10.0.0.2", 00:26:20.978 "adrfam": "ipv4", 00:26:20.978 "trsvcid": "4420", 00:26:20.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:20.978 "hdgst": false, 00:26:20.978 "ddgst": false 00:26:20.978 }, 00:26:20.978 "method": "bdev_nvme_attach_controller" 00:26:20.978 }' 00:26:20.978 [2024-05-12 07:06:27.976944] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:20.978 [2024-05-12 07:06:27.977036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143335 ] 00:26:20.978 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.978 [2024-05-12 07:06:28.037040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.237 [2024-05-12 07:06:28.142876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.495 Running I/O for 15 seconds... 
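Both bdevperf runs above receive their bdev_nvme_attach_controller config through a process-substitution file descriptor (--json /dev/fd/62 or /dev/fd/63) generated by gen_nvmf_target_json. An equivalent standalone form writes the same config to a regular file; the attach-controller params are copied from the expanded JSON in the log, while the outer subsystems/config wrapper is an assumption about gen_nvmf_target_json's full output shape:

```shell
#!/usr/bin/env bash
# Materialize the bdevperf target config from the log as a file instead of
# a /dev/fd stream. Wrapper structure assumed; params verbatim from the log.
cfg=$(mktemp /tmp/nvmf_target.XXXXXX.json)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
echo "$cfg"
# Then, matching the 15-second run above:
#   bdevperf --json "$cfg" -q 128 -o 4096 -w verify -t 15 -f
```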
00:26:24.033 07:06:30 -- host/bdevperf.sh@33 -- # kill -9 3142910 00:26:24.033 07:06:30 -- host/bdevperf.sh@35 -- # sleep 3 00:26:24.033 [2024-05-12 07:06:30.949387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:24.033 [2024-05-12 07:06:30.949872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.949965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.949993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:24.033 [2024-05-12 07:06:30.950477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.033 [2024-05-12 07:06:30.950525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.033 [2024-05-12 07:06:30.950623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.033 [2024-05-12 07:06:30.950701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.033 [2024-05-12 07:06:30.950721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.950737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.950770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.950785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.950800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.950815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.950830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.950845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.950860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.950874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.950890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.950904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.950919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.950933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.950949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.950963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:24.034 [2024-05-12 07:06:30.951071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:24.034 [2024-05-12 07:06:30.951633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-05-12 07:06:30.951929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.951959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.951975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.952016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.952033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.952055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.952072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.952088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.034 [2024-05-12 07:06:30.952105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.034 [2024-05-12 07:06:30.952120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-05-12 07:06:30.952153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-05-12 07:06:30.952186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 
[2024-05-12 07:06:30.952235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-05-12 07:06:30.952551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-05-12 07:06:30.952583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-05-12 07:06:30.952615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-05-12 07:06:30.952647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 
[2024-05-12 07:06:30.952829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.952961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.952992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-05-12 07:06:30.953140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-05-12 07:06:30.953173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:24.035 [2024-05-12 07:06:30.953388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-05-12 07:06:30.953469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.035 [2024-05-12 07:06:30.953486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.035 [2024-05-12 07:06:30.953501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.036 [2024-05-12 07:06:30.953534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.036 [2024-05-12 07:06:30.953566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.036 [2024-05-12 07:06:30.953599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.036 [2024-05-12 07:06:30.953631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.036 [2024-05-12 07:06:30.953669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.036 [2024-05-12 07:06:30.953716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.036 [2024-05-12 07:06:30.953765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.036 [2024-05-12 07:06:30.953796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.036 [2024-05-12 07:06:30.953825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.036 [2024-05-12 07:06:30.953855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.953869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef13a0 is same with the state(5) to be set 00:26:24.036 [2024-05-12 07:06:30.953886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.036 [2024-05-12 07:06:30.953899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.036 [2024-05-12 07:06:30.953911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126904 len:8 PRP1 0x0 PRP2 0x0 00:26:24.036 [2024-05-12 07:06:30.953924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.954004] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xef13a0 was disconnected and freed. reset controller. 
00:26:24.036 [2024-05-12 07:06:30.954092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.036 [2024-05-12 07:06:30.954116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.954133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.036 [2024-05-12 07:06:30.954149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.954164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.036 [2024-05-12 07:06:30.954179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.954204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.036 [2024-05-12 07:06:30.954218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.036 [2024-05-12 07:06:30.954238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.036 [2024-05-12 07:06:30.956556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.036 [2024-05-12 07:06:30.956597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.036 [2024-05-12 07:06:30.957148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 
07:06:30.957410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:30.957439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.036 [2024-05-12 07:06:30.957457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.036 [2024-05-12 07:06:30.957661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.036 [2024-05-12 07:06:30.957869] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.036 [2024-05-12 07:06:30.957892] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.036 [2024-05-12 07:06:30.957910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.036 [2024-05-12 07:06:30.960336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.036 [2024-05-12 07:06:30.969312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.036 [2024-05-12 07:06:30.969752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:30.969988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:30.970014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.036 [2024-05-12 07:06:30.970046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.036 [2024-05-12 07:06:30.970211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.036 [2024-05-12 07:06:30.970381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.036 [2024-05-12 07:06:30.970405] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.036 [2024-05-12 07:06:30.970421] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.036 [2024-05-12 07:06:30.972864] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.036 [2024-05-12 07:06:30.981733] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.036 [2024-05-12 07:06:30.982104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:30.982344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:30.982391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.036 [2024-05-12 07:06:30.982409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.036 [2024-05-12 07:06:30.982575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.036 [2024-05-12 07:06:30.982794] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.036 [2024-05-12 07:06:30.982819] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.036 [2024-05-12 07:06:30.982836] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.036 [2024-05-12 07:06:30.985327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.036 [2024-05-12 07:06:30.994341] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.036 [2024-05-12 07:06:30.994769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:30.994947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:30.994976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.036 [2024-05-12 07:06:30.994994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.036 [2024-05-12 07:06:30.995179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.036 [2024-05-12 07:06:30.995367] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.036 [2024-05-12 07:06:30.995391] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.036 [2024-05-12 07:06:30.995407] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.036 [2024-05-12 07:06:30.997549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.036 [2024-05-12 07:06:31.006825] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.036 [2024-05-12 07:06:31.007225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:31.007466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:31.007506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.036 [2024-05-12 07:06:31.007523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.036 [2024-05-12 07:06:31.007670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.036 [2024-05-12 07:06:31.007797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.036 [2024-05-12 07:06:31.007822] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.036 [2024-05-12 07:06:31.007839] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.036 [2024-05-12 07:06:31.010178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.036 [2024-05-12 07:06:31.019502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.036 [2024-05-12 07:06:31.019915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:31.020154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-05-12 07:06:31.020183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.036 [2024-05-12 07:06:31.020202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.036 [2024-05-12 07:06:31.020368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.037 [2024-05-12 07:06:31.020574] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.037 [2024-05-12 07:06:31.020600] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.037 [2024-05-12 07:06:31.020616] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.037 [2024-05-12 07:06:31.023001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.037 [2024-05-12 07:06:31.032249] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.037 [2024-05-12 07:06:31.032602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.032901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.032933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.037 [2024-05-12 07:06:31.032951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.037 [2024-05-12 07:06:31.033155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.037 [2024-05-12 07:06:31.033326] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.037 [2024-05-12 07:06:31.033351] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.037 [2024-05-12 07:06:31.033368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.037 [2024-05-12 07:06:31.035674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.037 [2024-05-12 07:06:31.045107] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.037 [2024-05-12 07:06:31.045507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.045722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.045754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.037 [2024-05-12 07:06:31.045772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.037 [2024-05-12 07:06:31.045958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.037 [2024-05-12 07:06:31.046145] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.037 [2024-05-12 07:06:31.046170] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.037 [2024-05-12 07:06:31.046187] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.037 [2024-05-12 07:06:31.048493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.037 [2024-05-12 07:06:31.057620] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.037 [2024-05-12 07:06:31.057997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.058340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.058399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.037 [2024-05-12 07:06:31.058418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.037 [2024-05-12 07:06:31.058567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.037 [2024-05-12 07:06:31.058751] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.037 [2024-05-12 07:06:31.058777] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.037 [2024-05-12 07:06:31.058794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.037 [2024-05-12 07:06:31.061149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.037 [2024-05-12 07:06:31.070180] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.037 [2024-05-12 07:06:31.070569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.070780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.070811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.037 [2024-05-12 07:06:31.070830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.037 [2024-05-12 07:06:31.070979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.037 [2024-05-12 07:06:31.071113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.037 [2024-05-12 07:06:31.071136] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.037 [2024-05-12 07:06:31.071152] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.037 [2024-05-12 07:06:31.073478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.037 [2024-05-12 07:06:31.082799] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.037 [2024-05-12 07:06:31.083243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.083485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.083526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.037 [2024-05-12 07:06:31.083543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.037 [2024-05-12 07:06:31.083686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.037 [2024-05-12 07:06:31.083864] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.037 [2024-05-12 07:06:31.083889] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.037 [2024-05-12 07:06:31.083907] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.037 [2024-05-12 07:06:31.086216] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.037 [2024-05-12 07:06:31.095446] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.037 [2024-05-12 07:06:31.095815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.096000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.096029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.037 [2024-05-12 07:06:31.096047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.037 [2024-05-12 07:06:31.096197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.037 [2024-05-12 07:06:31.096348] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.037 [2024-05-12 07:06:31.096373] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.037 [2024-05-12 07:06:31.096390] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.037 [2024-05-12 07:06:31.098573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.037 [2024-05-12 07:06:31.108144] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.037 [2024-05-12 07:06:31.108550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.108840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.108877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.037 [2024-05-12 07:06:31.108897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.037 [2024-05-12 07:06:31.109082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.037 [2024-05-12 07:06:31.109251] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.037 [2024-05-12 07:06:31.109277] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.037 [2024-05-12 07:06:31.109293] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.037 [2024-05-12 07:06:31.111581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.037 [2024-05-12 07:06:31.120631] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.037 [2024-05-12 07:06:31.121020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.121230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-05-12 07:06:31.121260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.038 [2024-05-12 07:06:31.121278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.038 [2024-05-12 07:06:31.121481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.038 [2024-05-12 07:06:31.121653] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.038 [2024-05-12 07:06:31.121678] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.038 [2024-05-12 07:06:31.121707] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.038 [2024-05-12 07:06:31.124039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.038 [2024-05-12 07:06:31.133302] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.038 [2024-05-12 07:06:31.133671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.038 [2024-05-12 07:06:31.133892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.038 [2024-05-12 07:06:31.133920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.038 [2024-05-12 07:06:31.133952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.038 [2024-05-12 07:06:31.134098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.038 [2024-05-12 07:06:31.134286] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.038 [2024-05-12 07:06:31.134311] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.038 [2024-05-12 07:06:31.134328] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.038 [2024-05-12 07:06:31.136527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.038 [2024-05-12 07:06:31.146161] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.038 [2024-05-12 07:06:31.146565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.038 [2024-05-12 07:06:31.146776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.038 [2024-05-12 07:06:31.146823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.038 [2024-05-12 07:06:31.146847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.038 [2024-05-12 07:06:31.147032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.038 [2024-05-12 07:06:31.147184] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.038 [2024-05-12 07:06:31.147209] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.038 [2024-05-12 07:06:31.147226] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.038 [2024-05-12 07:06:31.149607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.300 [2024-05-12 07:06:31.158754] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.300 [2024-05-12 07:06:31.159162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.159362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.159390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.300 [2024-05-12 07:06:31.159408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.300 [2024-05-12 07:06:31.159629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.300 [2024-05-12 07:06:31.159833] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.300 [2024-05-12 07:06:31.159860] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.300 [2024-05-12 07:06:31.159876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.300 [2024-05-12 07:06:31.162252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.300 [2024-05-12 07:06:31.171515] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.300 [2024-05-12 07:06:31.171945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.172305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.172356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.300 [2024-05-12 07:06:31.172373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.300 [2024-05-12 07:06:31.172557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.300 [2024-05-12 07:06:31.172741] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.300 [2024-05-12 07:06:31.172766] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.300 [2024-05-12 07:06:31.172782] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.300 [2024-05-12 07:06:31.175230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.300 [2024-05-12 07:06:31.184119] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.300 [2024-05-12 07:06:31.184543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.184759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.184786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.300 [2024-05-12 07:06:31.184819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.300 [2024-05-12 07:06:31.184976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.300 [2024-05-12 07:06:31.185163] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.300 [2024-05-12 07:06:31.185188] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.300 [2024-05-12 07:06:31.185205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.300 [2024-05-12 07:06:31.187530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.300 [2024-05-12 07:06:31.196710] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.300 [2024-05-12 07:06:31.197185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.197550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.197580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.300 [2024-05-12 07:06:31.197598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.300 [2024-05-12 07:06:31.197798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.300 [2024-05-12 07:06:31.197986] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.300 [2024-05-12 07:06:31.198012] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.300 [2024-05-12 07:06:31.198028] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.300 [2024-05-12 07:06:31.200496] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.300 [2024-05-12 07:06:31.209253] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.300 [2024-05-12 07:06:31.209585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.209767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.300 [2024-05-12 07:06:31.209794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.300 [2024-05-12 07:06:31.209810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.300 [2024-05-12 07:06:31.209982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.300 [2024-05-12 07:06:31.210115] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.300 [2024-05-12 07:06:31.210138] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.301 [2024-05-12 07:06:31.210155] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.301 [2024-05-12 07:06:31.212585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.301 [2024-05-12 07:06:31.221957] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.301 [2024-05-12 07:06:31.222311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.301 [2024-05-12 07:06:31.222612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.301 [2024-05-12 07:06:31.222660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.301 [2024-05-12 07:06:31.222679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.301 [2024-05-12 07:06:31.222819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.301 [2024-05-12 07:06:31.223014] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.301 [2024-05-12 07:06:31.223040] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.301 [2024-05-12 07:06:31.223056] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.301 [2024-05-12 07:06:31.225416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.301 [2024-05-12 07:06:31.234544] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.301 [2024-05-12 07:06:31.234940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.301 [2024-05-12 07:06:31.235131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.301 [2024-05-12 07:06:31.235159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.301 [2024-05-12 07:06:31.235175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.301 [2024-05-12 07:06:31.235339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.301 [2024-05-12 07:06:31.235509] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.301 [2024-05-12 07:06:31.235535] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.301 [2024-05-12 07:06:31.235552] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.301 [2024-05-12 07:06:31.237909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.301 [2024-05-12 07:06:31.247273] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.301 [2024-05-12 07:06:31.247676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.247877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.247902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.301 [2024-05-12 07:06:31.247918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.301 [2024-05-12 07:06:31.248105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.301 [2024-05-12 07:06:31.248268] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.301 [2024-05-12 07:06:31.248291] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.301 [2024-05-12 07:06:31.248319] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.301 [2024-05-12 07:06:31.250680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.301 [2024-05-12 07:06:31.259863] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.301 [2024-05-12 07:06:31.260280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.260430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.260475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.301 [2024-05-12 07:06:31.260493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.301 [2024-05-12 07:06:31.260660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.301 [2024-05-12 07:06:31.260816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.301 [2024-05-12 07:06:31.260844] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.301 [2024-05-12 07:06:31.260860] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.301 [2024-05-12 07:06:31.262988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.301 [2024-05-12 07:06:31.272603] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.301 [2024-05-12 07:06:31.273000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.273263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.273292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.301 [2024-05-12 07:06:31.273310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.301 [2024-05-12 07:06:31.273494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.301 [2024-05-12 07:06:31.273664] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.301 [2024-05-12 07:06:31.273690] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.301 [2024-05-12 07:06:31.273726] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.301 [2024-05-12 07:06:31.276112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.301 [2024-05-12 07:06:31.285351] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.301 [2024-05-12 07:06:31.285851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.286033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.286073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.301 [2024-05-12 07:06:31.286089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.301 [2024-05-12 07:06:31.286269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.301 [2024-05-12 07:06:31.286421] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.301 [2024-05-12 07:06:31.286445] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.301 [2024-05-12 07:06:31.286461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.301 [2024-05-12 07:06:31.288737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.301 [2024-05-12 07:06:31.297928] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.301 [2024-05-12 07:06:31.298298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.298522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.298568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.301 [2024-05-12 07:06:31.298586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.301 [2024-05-12 07:06:31.298819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.301 [2024-05-12 07:06:31.298974] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.301 [2024-05-12 07:06:31.299014] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.301 [2024-05-12 07:06:31.299036] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.301 [2024-05-12 07:06:31.301302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.301 [2024-05-12 07:06:31.310512] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.301 [2024-05-12 07:06:31.310942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.311124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.311153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.301 [2024-05-12 07:06:31.311171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.301 [2024-05-12 07:06:31.311318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.301 [2024-05-12 07:06:31.311488] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.301 [2024-05-12 07:06:31.311514] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.301 [2024-05-12 07:06:31.311530] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.301 [2024-05-12 07:06:31.313766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.301 [2024-05-12 07:06:31.323240] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.301 [2024-05-12 07:06:31.323671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.323905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.323934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.301 [2024-05-12 07:06:31.323952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.301 [2024-05-12 07:06:31.324154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.301 [2024-05-12 07:06:31.324378] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.301 [2024-05-12 07:06:31.324403] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.301 [2024-05-12 07:06:31.324430] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.301 [2024-05-12 07:06:31.326758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.301 [2024-05-12 07:06:31.335756] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.301 [2024-05-12 07:06:31.336164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.336409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.301 [2024-05-12 07:06:31.336456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.302 [2024-05-12 07:06:31.336475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.302 [2024-05-12 07:06:31.336623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.302 [2024-05-12 07:06:31.336821] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.302 [2024-05-12 07:06:31.336846] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.302 [2024-05-12 07:06:31.336862] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.302 [2024-05-12 07:06:31.339267] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.302 [2024-05-12 07:06:31.348514] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.302 [2024-05-12 07:06:31.348948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.349128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.349154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.302 [2024-05-12 07:06:31.349170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.302 [2024-05-12 07:06:31.349361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.302 [2024-05-12 07:06:31.349538] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.302 [2024-05-12 07:06:31.349564] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.302 [2024-05-12 07:06:31.349580] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.302 [2024-05-12 07:06:31.351999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.302 [2024-05-12 07:06:31.361042] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.302 [2024-05-12 07:06:31.361395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.361625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.361650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.302 [2024-05-12 07:06:31.361666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.302 [2024-05-12 07:06:31.361887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.302 [2024-05-12 07:06:31.362058] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.302 [2024-05-12 07:06:31.362084] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.302 [2024-05-12 07:06:31.362100] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.302 [2024-05-12 07:06:31.364387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.302 [2024-05-12 07:06:31.373790] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.302 [2024-05-12 07:06:31.374161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.374404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.374454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.302 [2024-05-12 07:06:31.374474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.302 [2024-05-12 07:06:31.374605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.302 [2024-05-12 07:06:31.374843] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.302 [2024-05-12 07:06:31.374869] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.302 [2024-05-12 07:06:31.374885] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.302 [2024-05-12 07:06:31.377170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.302 [2024-05-12 07:06:31.386419] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.302 [2024-05-12 07:06:31.386799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.387083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.387132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.302 [2024-05-12 07:06:31.387150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.302 [2024-05-12 07:06:31.387315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.302 [2024-05-12 07:06:31.387484] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.302 [2024-05-12 07:06:31.387508] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.302 [2024-05-12 07:06:31.387523] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.302 [2024-05-12 07:06:31.389985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.302 [2024-05-12 07:06:31.399102] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.302 [2024-05-12 07:06:31.399490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.399716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.399746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.302 [2024-05-12 07:06:31.399765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.302 [2024-05-12 07:06:31.399913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.302 [2024-05-12 07:06:31.400064] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.302 [2024-05-12 07:06:31.400089] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.302 [2024-05-12 07:06:31.400106] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.302 [2024-05-12 07:06:31.402557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.302 [2024-05-12 07:06:31.411673] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.302 [2024-05-12 07:06:31.412068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.412326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.412372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.302 [2024-05-12 07:06:31.412390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.302 [2024-05-12 07:06:31.412557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.302 [2024-05-12 07:06:31.412741] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.302 [2024-05-12 07:06:31.412768] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.302 [2024-05-12 07:06:31.412784] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.302 [2024-05-12 07:06:31.415143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.302 [2024-05-12 07:06:31.424034] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.302 [2024-05-12 07:06:31.424361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.424599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.302 [2024-05-12 07:06:31.424648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.302 [2024-05-12 07:06:31.424666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.302 [2024-05-12 07:06:31.424829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.302 [2024-05-12 07:06:31.425017] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.302 [2024-05-12 07:06:31.425043] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.302 [2024-05-12 07:06:31.425059] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.564 [2024-05-12 07:06:31.427473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.564 [2024-05-12 07:06:31.436669] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.564 [2024-05-12 07:06:31.437078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.437277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.437307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.564 [2024-05-12 07:06:31.437325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.564 [2024-05-12 07:06:31.437547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.564 [2024-05-12 07:06:31.437769] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.564 [2024-05-12 07:06:31.437796] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.564 [2024-05-12 07:06:31.437812] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.564 [2024-05-12 07:06:31.440097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.564 [2024-05-12 07:06:31.449080] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.564 [2024-05-12 07:06:31.449421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.449671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.449728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.564 [2024-05-12 07:06:31.449749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.564 [2024-05-12 07:06:31.449898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.564 [2024-05-12 07:06:31.450069] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.564 [2024-05-12 07:06:31.450094] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.564 [2024-05-12 07:06:31.450111] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.564 [2024-05-12 07:06:31.452324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.564 [2024-05-12 07:06:31.461693] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.564 [2024-05-12 07:06:31.462043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.462337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.462391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.564 [2024-05-12 07:06:31.462410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.564 [2024-05-12 07:06:31.462577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.564 [2024-05-12 07:06:31.462816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.564 [2024-05-12 07:06:31.462842] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.564 [2024-05-12 07:06:31.462858] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.564 [2024-05-12 07:06:31.465167] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.564 [2024-05-12 07:06:31.474094] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.564 [2024-05-12 07:06:31.474474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.474727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.474758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.564 [2024-05-12 07:06:31.474776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.564 [2024-05-12 07:06:31.474942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.564 [2024-05-12 07:06:31.475111] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.564 [2024-05-12 07:06:31.475135] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.564 [2024-05-12 07:06:31.475152] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.564 [2024-05-12 07:06:31.477283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.564 [2024-05-12 07:06:31.486769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.564 [2024-05-12 07:06:31.487212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.487472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.487520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.564 [2024-05-12 07:06:31.487538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.564 [2024-05-12 07:06:31.487736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.564 [2024-05-12 07:06:31.487900] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.564 [2024-05-12 07:06:31.487925] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.564 [2024-05-12 07:06:31.487941] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.564 [2024-05-12 07:06:31.490313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.564 [2024-05-12 07:06:31.499167] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.564 [2024-05-12 07:06:31.499544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.499788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.564 [2024-05-12 07:06:31.499820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.564 [2024-05-12 07:06:31.499844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.564 [2024-05-12 07:06:31.500011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.564 [2024-05-12 07:06:31.500199] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.564 [2024-05-12 07:06:31.500224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.564 [2024-05-12 07:06:31.500240] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.564 [2024-05-12 07:06:31.502420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.565 [2024-05-12 07:06:31.511784] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.565 [2024-05-12 07:06:31.512145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.512442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.512504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.565 [2024-05-12 07:06:31.512522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.565 [2024-05-12 07:06:31.512652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.565 [2024-05-12 07:06:31.512851] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.565 [2024-05-12 07:06:31.512876] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.565 [2024-05-12 07:06:31.512892] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.565 [2024-05-12 07:06:31.515285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.565 [2024-05-12 07:06:31.524507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.565 [2024-05-12 07:06:31.524913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.525179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.525224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.565 [2024-05-12 07:06:31.525242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.565 [2024-05-12 07:06:31.525444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.565 [2024-05-12 07:06:31.525633] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.565 [2024-05-12 07:06:31.525658] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.565 [2024-05-12 07:06:31.525674] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.565 [2024-05-12 07:06:31.528020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.565 [2024-05-12 07:06:31.537008] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.565 [2024-05-12 07:06:31.537456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.537633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.537662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.565 [2024-05-12 07:06:31.537680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.565 [2024-05-12 07:06:31.537860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.565 [2024-05-12 07:06:31.537995] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.565 [2024-05-12 07:06:31.538020] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.565 [2024-05-12 07:06:31.538036] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.565 [2024-05-12 07:06:31.540429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.565 [2024-05-12 07:06:31.549525] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.565 [2024-05-12 07:06:31.549895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.550128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.550155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.565 [2024-05-12 07:06:31.550186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.565 [2024-05-12 07:06:31.550368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.565 [2024-05-12 07:06:31.550538] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.565 [2024-05-12 07:06:31.550563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.565 [2024-05-12 07:06:31.550579] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.565 [2024-05-12 07:06:31.552965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.565 [2024-05-12 07:06:31.562078] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.565 [2024-05-12 07:06:31.562486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.562680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.562720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.565 [2024-05-12 07:06:31.562740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.565 [2024-05-12 07:06:31.562943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.565 [2024-05-12 07:06:31.563095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.565 [2024-05-12 07:06:31.563121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.565 [2024-05-12 07:06:31.563138] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.565 [2024-05-12 07:06:31.565532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.565 [2024-05-12 07:06:31.574401] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.565 [2024-05-12 07:06:31.574816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.575017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.565 [2024-05-12 07:06:31.575042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:24.565 [2024-05-12 07:06:31.575076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:24.565 [2024-05-12 07:06:31.575243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:24.565 [2024-05-12 07:06:31.575413] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.565 [2024-05-12 07:06:31.575444] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.565 [2024-05-12 07:06:31.575461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.565 [2024-05-12 07:06:31.577681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.565 [2024-05-12 07:06:31.586978] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.565 [2024-05-12 07:06:31.587367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.565 [2024-05-12 07:06:31.587641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.565 [2024-05-12 07:06:31.587667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.565 [2024-05-12 07:06:31.587683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.565 [2024-05-12 07:06:31.587884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.565 [2024-05-12 07:06:31.588037] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.565 [2024-05-12 07:06:31.588063] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.565 [2024-05-12 07:06:31.588079] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.565 [2024-05-12 07:06:31.590432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.565 [2024-05-12 07:06:31.599689] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.565 [2024-05-12 07:06:31.600114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.565 [2024-05-12 07:06:31.600311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.565 [2024-05-12 07:06:31.600341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.565 [2024-05-12 07:06:31.600360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.565 [2024-05-12 07:06:31.600580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.565 [2024-05-12 07:06:31.600781] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.565 [2024-05-12 07:06:31.600807] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.565 [2024-05-12 07:06:31.600823] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.565 [2024-05-12 07:06:31.603198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.565 [2024-05-12 07:06:31.612226] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.565 [2024-05-12 07:06:31.612641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.565 [2024-05-12 07:06:31.612869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.565 [2024-05-12 07:06:31.612897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.565 [2024-05-12 07:06:31.612929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.565 [2024-05-12 07:06:31.613076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.565 [2024-05-12 07:06:31.613246] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.565 [2024-05-12 07:06:31.613272] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.565 [2024-05-12 07:06:31.613293] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.565 [2024-05-12 07:06:31.615475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.565 [2024-05-12 07:06:31.624651] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.565 [2024-05-12 07:06:31.625157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.565 [2024-05-12 07:06:31.625454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.565 [2024-05-12 07:06:31.625480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.565 [2024-05-12 07:06:31.625496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.565 [2024-05-12 07:06:31.625652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.565 [2024-05-12 07:06:31.625802] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.566 [2024-05-12 07:06:31.625827] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.566 [2024-05-12 07:06:31.625843] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.566 [2024-05-12 07:06:31.628166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.566 [2024-05-12 07:06:31.637328] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.566 [2024-05-12 07:06:31.637719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.637916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.637944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.566 [2024-05-12 07:06:31.637962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.566 [2024-05-12 07:06:31.638128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.566 [2024-05-12 07:06:31.638279] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.566 [2024-05-12 07:06:31.638305] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.566 [2024-05-12 07:06:31.638321] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.566 [2024-05-12 07:06:31.640856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.566 [2024-05-12 07:06:31.649927] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.566 [2024-05-12 07:06:31.650353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.650649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.650711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.566 [2024-05-12 07:06:31.650733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.566 [2024-05-12 07:06:31.650936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.566 [2024-05-12 07:06:31.651087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.566 [2024-05-12 07:06:31.651113] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.566 [2024-05-12 07:06:31.651130] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.566 [2024-05-12 07:06:31.653657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.566 [2024-05-12 07:06:31.662501] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.566 [2024-05-12 07:06:31.662894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.663094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.663122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.566 [2024-05-12 07:06:31.663140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.566 [2024-05-12 07:06:31.663360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.566 [2024-05-12 07:06:31.663567] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.566 [2024-05-12 07:06:31.663592] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.566 [2024-05-12 07:06:31.663608] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.566 [2024-05-12 07:06:31.665997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.566 [2024-05-12 07:06:31.675114] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.566 [2024-05-12 07:06:31.675557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.675769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.675799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.566 [2024-05-12 07:06:31.675818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.566 [2024-05-12 07:06:31.676003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.566 [2024-05-12 07:06:31.676154] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.566 [2024-05-12 07:06:31.676179] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.566 [2024-05-12 07:06:31.676195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.566 [2024-05-12 07:06:31.678538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.566 [2024-05-12 07:06:31.687606] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.566 [2024-05-12 07:06:31.688049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.688225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.566 [2024-05-12 07:06:31.688252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.566 [2024-05-12 07:06:31.688268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.566 [2024-05-12 07:06:31.688451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.566 [2024-05-12 07:06:31.688663] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.566 [2024-05-12 07:06:31.688688] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.566 [2024-05-12 07:06:31.688720] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.566 [2024-05-12 07:06:31.691247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.828 [2024-05-12 07:06:31.700158] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.828 [2024-05-12 07:06:31.700498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.828 [2024-05-12 07:06:31.700727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.828 [2024-05-12 07:06:31.700757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.828 [2024-05-12 07:06:31.700775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.828 [2024-05-12 07:06:31.700905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.828 [2024-05-12 07:06:31.701075] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.828 [2024-05-12 07:06:31.701100] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.828 [2024-05-12 07:06:31.701116] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.828 [2024-05-12 07:06:31.703565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.828 [2024-05-12 07:06:31.712671] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.828 [2024-05-12 07:06:31.713026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.828 [2024-05-12 07:06:31.713228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.828 [2024-05-12 07:06:31.713256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.713274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.713459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.713647] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.713672] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.713688] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.715840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.829 [2024-05-12 07:06:31.725041] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.829 [2024-05-12 07:06:31.725494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.725738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.725769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.725787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.725990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.726126] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.726151] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.726167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.728454] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.829 [2024-05-12 07:06:31.737584] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.829 [2024-05-12 07:06:31.737969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.738195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.738225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.738243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.738391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.738560] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.738586] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.738602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.740939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.829 [2024-05-12 07:06:31.750142] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.829 [2024-05-12 07:06:31.750522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.750742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.750777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.750794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.750944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.751151] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.751177] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.751194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.753467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.829 [2024-05-12 07:06:31.762864] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.829 [2024-05-12 07:06:31.763263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.763502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.763531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.763549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.763679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.763844] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.763866] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.763880] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.766221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.829 [2024-05-12 07:06:31.775336] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.829 [2024-05-12 07:06:31.775737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.775926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.775958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.775975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.776156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.776362] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.776388] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.776404] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.778582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.829 [2024-05-12 07:06:31.787929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.829 [2024-05-12 07:06:31.788282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.788594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.788634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.788650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.788860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.789067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.789092] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.789108] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.791448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.829 [2024-05-12 07:06:31.800523] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.829 [2024-05-12 07:06:31.800934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.801227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.801254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.801284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.801479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.801612] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.801638] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.801654] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.803846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.829 [2024-05-12 07:06:31.813067] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.829 [2024-05-12 07:06:31.813475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.813710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.813740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.813764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.813895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.814083] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.814108] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.814125] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.816428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.829 [2024-05-12 07:06:31.825522] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.829 [2024-05-12 07:06:31.825904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.826120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.829 [2024-05-12 07:06:31.826146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.829 [2024-05-12 07:06:31.826162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.829 [2024-05-12 07:06:31.826330] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.829 [2024-05-12 07:06:31.826540] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.829 [2024-05-12 07:06:31.826566] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.829 [2024-05-12 07:06:31.826582] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.829 [2024-05-12 07:06:31.828901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.838125] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.838514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.838721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.838750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.838768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.830 [2024-05-12 07:06:31.838970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.830 [2024-05-12 07:06:31.839086] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.830 [2024-05-12 07:06:31.839111] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.830 [2024-05-12 07:06:31.839127] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.830 [2024-05-12 07:06:31.841504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.850759] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.851339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.851637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.851663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.851679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.830 [2024-05-12 07:06:31.851867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.830 [2024-05-12 07:06:31.852053] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.830 [2024-05-12 07:06:31.852079] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.830 [2024-05-12 07:06:31.852095] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.830 [2024-05-12 07:06:31.854346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.863363] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.863812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.864023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.864050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.864066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.830 [2024-05-12 07:06:31.864266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.830 [2024-05-12 07:06:31.864416] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.830 [2024-05-12 07:06:31.864442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.830 [2024-05-12 07:06:31.864458] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.830 [2024-05-12 07:06:31.866865] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.875876] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.876255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.876472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.876498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.876514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.830 [2024-05-12 07:06:31.876726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.830 [2024-05-12 07:06:31.876914] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.830 [2024-05-12 07:06:31.876942] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.830 [2024-05-12 07:06:31.876958] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.830 [2024-05-12 07:06:31.879351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.888479] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.888846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.889043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.889073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.889092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.830 [2024-05-12 07:06:31.889295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.830 [2024-05-12 07:06:31.889490] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.830 [2024-05-12 07:06:31.889516] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.830 [2024-05-12 07:06:31.889532] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.830 [2024-05-12 07:06:31.891848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.900934] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.901267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.901595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.901641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.901659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.830 [2024-05-12 07:06:31.901823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.830 [2024-05-12 07:06:31.901957] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.830 [2024-05-12 07:06:31.901981] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.830 [2024-05-12 07:06:31.901997] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.830 [2024-05-12 07:06:31.904263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.913408] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.913801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.913981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.914010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.914028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.830 [2024-05-12 07:06:31.914195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.830 [2024-05-12 07:06:31.914365] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.830 [2024-05-12 07:06:31.914391] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.830 [2024-05-12 07:06:31.914407] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.830 [2024-05-12 07:06:31.916779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.926209] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.926625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.926856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.926887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.926905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.830 [2024-05-12 07:06:31.927073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.830 [2024-05-12 07:06:31.927224] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.830 [2024-05-12 07:06:31.927258] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.830 [2024-05-12 07:06:31.927275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.830 [2024-05-12 07:06:31.929734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.938807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.939213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.939461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.939488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.939505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.830 [2024-05-12 07:06:31.939660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.830 [2024-05-12 07:06:31.939912] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.830 [2024-05-12 07:06:31.939938] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.830 [2024-05-12 07:06:31.939955] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.830 [2024-05-12 07:06:31.942424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.830 [2024-05-12 07:06:31.951437] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.830 [2024-05-12 07:06:31.951810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.951982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.830 [2024-05-12 07:06:31.952010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:24.830 [2024-05-12 07:06:31.952028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:24.831 [2024-05-12 07:06:31.952176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:24.831 [2024-05-12 07:06:31.952346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.831 [2024-05-12 07:06:31.952370] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.831 [2024-05-12 07:06:31.952386] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.831 [2024-05-12 07:06:31.954570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.090 [2024-05-12 07:06:31.964122] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.090 [2024-05-12 07:06:31.964668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-12 07:06:31.964925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-12 07:06:31.964953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.090 [2024-05-12 07:06:31.964972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.090 [2024-05-12 07:06:31.965156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.090 [2024-05-12 07:06:31.965326] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.090 [2024-05-12 07:06:31.965352] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.090 [2024-05-12 07:06:31.965374] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.090 [2024-05-12 07:06:31.967627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.090 [2024-05-12 07:06:31.976784] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.090 [2024-05-12 07:06:31.977161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-12 07:06:31.977371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-12 07:06:31.977396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.090 [2024-05-12 07:06:31.977413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.090 [2024-05-12 07:06:31.977569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.090 [2024-05-12 07:06:31.977796] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.090 [2024-05-12 07:06:31.977820] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.090 [2024-05-12 07:06:31.977834] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.090 [2024-05-12 07:06:31.980293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.090 [2024-05-12 07:06:31.989229] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.090 [2024-05-12 07:06:31.989618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-12 07:06:31.989923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-12 07:06:31.989954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.090 [2024-05-12 07:06:31.989973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.090 [2024-05-12 07:06:31.990158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.090 [2024-05-12 07:06:31.990363] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.090 [2024-05-12 07:06:31.990389] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.090 [2024-05-12 07:06:31.990406] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.090 [2024-05-12 07:06:31.992736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.001865] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.002225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.002428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.002456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.002475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.002677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.091 [2024-05-12 07:06:32.002823] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.091 [2024-05-12 07:06:32.002849] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.091 [2024-05-12 07:06:32.002866] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.091 [2024-05-12 07:06:32.005289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.014460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.014836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.015070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.015097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.015128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.015328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.091 [2024-05-12 07:06:32.015516] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.091 [2024-05-12 07:06:32.015542] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.091 [2024-05-12 07:06:32.015558] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.091 [2024-05-12 07:06:32.017822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.026974] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.027363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.027617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.027642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.027673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.027824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.091 [2024-05-12 07:06:32.027935] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.091 [2024-05-12 07:06:32.027974] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.091 [2024-05-12 07:06:32.027990] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.091 [2024-05-12 07:06:32.030476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.039502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.039908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.040227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.040252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.040282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.040417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.091 [2024-05-12 07:06:32.040603] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.091 [2024-05-12 07:06:32.040628] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.091 [2024-05-12 07:06:32.040644] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.091 [2024-05-12 07:06:32.043009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.052086] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.052524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.052741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.052776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.052808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.052976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.091 [2024-05-12 07:06:32.053110] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.091 [2024-05-12 07:06:32.053134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.091 [2024-05-12 07:06:32.053150] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.091 [2024-05-12 07:06:32.055434] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.064706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.065076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.065265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.065291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.065307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.065496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.091 [2024-05-12 07:06:32.065690] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.091 [2024-05-12 07:06:32.065724] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.091 [2024-05-12 07:06:32.065741] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.091 [2024-05-12 07:06:32.068081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.077324] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.077807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.078081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.078109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.078128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.078312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.091 [2024-05-12 07:06:32.078518] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.091 [2024-05-12 07:06:32.078543] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.091 [2024-05-12 07:06:32.078559] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.091 [2024-05-12 07:06:32.080925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.089845] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.090196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.090528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.090579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.090597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.090811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.091 [2024-05-12 07:06:32.090965] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.091 [2024-05-12 07:06:32.090990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.091 [2024-05-12 07:06:32.091005] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.091 [2024-05-12 07:06:32.093308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.102373] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.102728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.102918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.102947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.102965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.103114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.091 [2024-05-12 07:06:32.103320] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.091 [2024-05-12 07:06:32.103344] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.091 [2024-05-12 07:06:32.103361] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.091 [2024-05-12 07:06:32.105630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.091 [2024-05-12 07:06:32.115091] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.091 [2024-05-12 07:06:32.115484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.115660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-12 07:06:32.115688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.091 [2024-05-12 07:06:32.115717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.091 [2024-05-12 07:06:32.115884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.092 [2024-05-12 07:06:32.116000] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.092 [2024-05-12 07:06:32.116025] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.092 [2024-05-12 07:06:32.116041] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.092 [2024-05-12 07:06:32.118362] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.360 [2024-05-12 07:06:32.454490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.360 [2024-05-12 07:06:32.454870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.360 [2024-05-12 07:06:32.455049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.360 [2024-05-12 07:06:32.455090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.360 [2024-05-12 07:06:32.455106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.360 [2024-05-12 07:06:32.455266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.360 [2024-05-12 07:06:32.455465] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.361 [2024-05-12 07:06:32.455490] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.361 [2024-05-12 07:06:32.455506] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.361 [2024-05-12 07:06:32.457918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.361 [2024-05-12 07:06:32.467038] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.361 [2024-05-12 07:06:32.467390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.361 [2024-05-12 07:06:32.467597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.361 [2024-05-12 07:06:32.467626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.361 [2024-05-12 07:06:32.467644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.361 [2024-05-12 07:06:32.467830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.361 [2024-05-12 07:06:32.468019] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.361 [2024-05-12 07:06:32.468056] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.361 [2024-05-12 07:06:32.468072] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.361 [2024-05-12 07:06:32.470378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.361 [2024-05-12 07:06:32.479755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.361 [2024-05-12 07:06:32.480088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.361 [2024-05-12 07:06:32.480382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.361 [2024-05-12 07:06:32.480430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.361 [2024-05-12 07:06:32.480465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.361 [2024-05-12 07:06:32.480595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.361 [2024-05-12 07:06:32.480794] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.361 [2024-05-12 07:06:32.480817] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.361 [2024-05-12 07:06:32.480831] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.622 [2024-05-12 07:06:32.483192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.622 [2024-05-12 07:06:32.492409] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.622 [2024-05-12 07:06:32.492826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.622 [2024-05-12 07:06:32.493027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.622 [2024-05-12 07:06:32.493053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.622 [2024-05-12 07:06:32.493070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.622 [2024-05-12 07:06:32.493250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.622 [2024-05-12 07:06:32.493420] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.622 [2024-05-12 07:06:32.493445] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.622 [2024-05-12 07:06:32.493461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.622 [2024-05-12 07:06:32.495712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.622 [2024-05-12 07:06:32.504920] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.622 [2024-05-12 07:06:32.505343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.622 [2024-05-12 07:06:32.505544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.622 [2024-05-12 07:06:32.505573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.622 [2024-05-12 07:06:32.505591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.622 [2024-05-12 07:06:32.505760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.622 [2024-05-12 07:06:32.505936] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.622 [2024-05-12 07:06:32.505957] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.622 [2024-05-12 07:06:32.505971] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.622 [2024-05-12 07:06:32.508100] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.622 [2024-05-12 07:06:32.517584] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.622 [2024-05-12 07:06:32.517958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.518195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.518242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.622 [2024-05-12 07:06:32.518261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.622 [2024-05-12 07:06:32.518437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.622 [2024-05-12 07:06:32.518590] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.622 [2024-05-12 07:06:32.518614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.622 [2024-05-12 07:06:32.518630] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.622 [2024-05-12 07:06:32.521054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.622 [2024-05-12 07:06:32.529935] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.622 [2024-05-12 07:06:32.530322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.530529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.530558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.622 [2024-05-12 07:06:32.530577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.622 [2024-05-12 07:06:32.530772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.622 [2024-05-12 07:06:32.530980] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.622 [2024-05-12 07:06:32.531004] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.622 [2024-05-12 07:06:32.531020] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.622 [2024-05-12 07:06:32.533448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.622 [2024-05-12 07:06:32.542420] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.622 [2024-05-12 07:06:32.542817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.542997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.543025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.622 [2024-05-12 07:06:32.543043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.622 [2024-05-12 07:06:32.543209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.622 [2024-05-12 07:06:32.543362] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.622 [2024-05-12 07:06:32.543386] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.622 [2024-05-12 07:06:32.543402] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.622 [2024-05-12 07:06:32.545730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.622 [2024-05-12 07:06:32.554941] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.622 [2024-05-12 07:06:32.555287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.555484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.555510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.622 [2024-05-12 07:06:32.555527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.622 [2024-05-12 07:06:32.555683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.622 [2024-05-12 07:06:32.555872] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.622 [2024-05-12 07:06:32.555897] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.622 [2024-05-12 07:06:32.555913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.622 [2024-05-12 07:06:32.558450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.622 [2024-05-12 07:06:32.567499] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.622 [2024-05-12 07:06:32.567911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.568091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.568120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.622 [2024-05-12 07:06:32.568138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.622 [2024-05-12 07:06:32.568340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.622 [2024-05-12 07:06:32.568529] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.622 [2024-05-12 07:06:32.568553] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.622 [2024-05-12 07:06:32.568569] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.622 [2024-05-12 07:06:32.570916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.622 [2024-05-12 07:06:32.580170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.622 [2024-05-12 07:06:32.580604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.580847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.622 [2024-05-12 07:06:32.580875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.622 [2024-05-12 07:06:32.580891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.622 [2024-05-12 07:06:32.581020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.622 [2024-05-12 07:06:32.581197] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.581221] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.581237] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.623 [2024-05-12 07:06:32.583338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.623 [2024-05-12 07:06:32.592736] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.623 [2024-05-12 07:06:32.593219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.593552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.593598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.623 [2024-05-12 07:06:32.593617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.623 [2024-05-12 07:06:32.593774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.623 [2024-05-12 07:06:32.593963] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.594003] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.594020] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.623 [2024-05-12 07:06:32.596396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.623 [2024-05-12 07:06:32.605432] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.623 [2024-05-12 07:06:32.605819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.606061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.606089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.623 [2024-05-12 07:06:32.606107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.623 [2024-05-12 07:06:32.606256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.623 [2024-05-12 07:06:32.606426] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.606450] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.606466] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.623 [2024-05-12 07:06:32.608756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.623 [2024-05-12 07:06:32.618033] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.623 [2024-05-12 07:06:32.618507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.618718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.618748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.623 [2024-05-12 07:06:32.618766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.623 [2024-05-12 07:06:32.618949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.623 [2024-05-12 07:06:32.619119] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.619144] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.619160] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.623 [2024-05-12 07:06:32.621687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.623 [2024-05-12 07:06:32.630357] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.623 [2024-05-12 07:06:32.630723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.630948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.630988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.623 [2024-05-12 07:06:32.631006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.623 [2024-05-12 07:06:32.631172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.623 [2024-05-12 07:06:32.631323] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.631347] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.631368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.623 [2024-05-12 07:06:32.633877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.623 [2024-05-12 07:06:32.642875] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.623 [2024-05-12 07:06:32.643251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.643468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.643514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.623 [2024-05-12 07:06:32.643532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.623 [2024-05-12 07:06:32.643680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.623 [2024-05-12 07:06:32.643860] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.643884] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.643901] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.623 [2024-05-12 07:06:32.646229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.623 [2024-05-12 07:06:32.655556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.623 [2024-05-12 07:06:32.655949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.656331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.656390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.623 [2024-05-12 07:06:32.656408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.623 [2024-05-12 07:06:32.656583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.623 [2024-05-12 07:06:32.656763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.656788] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.656804] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.623 [2024-05-12 07:06:32.659214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.623 [2024-05-12 07:06:32.668076] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.623 [2024-05-12 07:06:32.668617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.668849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.668879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.623 [2024-05-12 07:06:32.668897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.623 [2024-05-12 07:06:32.669063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.623 [2024-05-12 07:06:32.669178] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.669202] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.669219] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.623 [2024-05-12 07:06:32.671635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.623 [2024-05-12 07:06:32.680866] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.623 [2024-05-12 07:06:32.681311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.681641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.681702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.623 [2024-05-12 07:06:32.681723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.623 [2024-05-12 07:06:32.681852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.623 [2024-05-12 07:06:32.682022] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.682047] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.682063] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.623 [2024-05-12 07:06:32.684326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.623 [2024-05-12 07:06:32.693422] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.623 [2024-05-12 07:06:32.693839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.694054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.623 [2024-05-12 07:06:32.694079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.623 [2024-05-12 07:06:32.694095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.623 [2024-05-12 07:06:32.694207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.623 [2024-05-12 07:06:32.694403] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.623 [2024-05-12 07:06:32.694428] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.623 [2024-05-12 07:06:32.694444] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.624 [2024-05-12 07:06:32.696671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.624 [2024-05-12 07:06:32.706037] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.624 [2024-05-12 07:06:32.706486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.624 [2024-05-12 07:06:32.706755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.624 [2024-05-12 07:06:32.706783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.624 [2024-05-12 07:06:32.706799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.624 [2024-05-12 07:06:32.707022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.624 [2024-05-12 07:06:32.707228] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.624 [2024-05-12 07:06:32.707252] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.624 [2024-05-12 07:06:32.707269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.624 [2024-05-12 07:06:32.709769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.624 [2024-05-12 07:06:32.718438] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.624 [2024-05-12 07:06:32.718820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.624 [2024-05-12 07:06:32.719038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.624 [2024-05-12 07:06:32.719066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.624 [2024-05-12 07:06:32.719084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.624 [2024-05-12 07:06:32.719286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.624 [2024-05-12 07:06:32.719420] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.624 [2024-05-12 07:06:32.719443] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.624 [2024-05-12 07:06:32.719460] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.624 [2024-05-12 07:06:32.721468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.624 [2024-05-12 07:06:32.731034] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:25.624 [2024-05-12 07:06:32.731378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.624 [2024-05-12 07:06:32.731620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.624 [2024-05-12 07:06:32.731645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:25.624 [2024-05-12 07:06:32.731661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:25.624 [2024-05-12 07:06:32.731859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:25.624 [2024-05-12 07:06:32.732012] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:25.624 [2024-05-12 07:06:32.732036] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:25.624 [2024-05-12 07:06:32.732053] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:25.624 [2024-05-12 07:06:32.734498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:25.624 [2024-05-12 07:06:32.743654] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.624 [2024-05-12 07:06:32.744056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.624 [2024-05-12 07:06:32.744233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.624 [2024-05-12 07:06:32.744276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.624 [2024-05-12 07:06:32.744294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.624 [2024-05-12 07:06:32.744478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.624 [2024-05-12 07:06:32.744648] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.624 [2024-05-12 07:06:32.744672] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.624 [2024-05-12 07:06:32.744690] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.624 [2024-05-12 07:06:32.747007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.885 [2024-05-12 07:06:32.756197] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.885 [2024-05-12 07:06:32.756655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.756833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.756860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.885 [2024-05-12 07:06:32.756877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.885 [2024-05-12 07:06:32.757024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.885 [2024-05-12 07:06:32.757196] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.885 [2024-05-12 07:06:32.757217] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.885 [2024-05-12 07:06:32.757230] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.885 [2024-05-12 07:06:32.759537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.885 [2024-05-12 07:06:32.768893] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.885 [2024-05-12 07:06:32.769282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.769558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.769601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.885 [2024-05-12 07:06:32.769619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.885 [2024-05-12 07:06:32.769772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.885 [2024-05-12 07:06:32.769908] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.885 [2024-05-12 07:06:32.769930] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.885 [2024-05-12 07:06:32.769944] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.885 [2024-05-12 07:06:32.772318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.885 [2024-05-12 07:06:32.781498] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.885 [2024-05-12 07:06:32.781921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.782140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.782165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.885 [2024-05-12 07:06:32.782180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.885 [2024-05-12 07:06:32.782337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.885 [2024-05-12 07:06:32.782534] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.885 [2024-05-12 07:06:32.782559] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.885 [2024-05-12 07:06:32.782575] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.885 [2024-05-12 07:06:32.784958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.885 [2024-05-12 07:06:32.794089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.885 [2024-05-12 07:06:32.794454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.794660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.794694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.885 [2024-05-12 07:06:32.794724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.885 [2024-05-12 07:06:32.794873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.885 [2024-05-12 07:06:32.795006] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.885 [2024-05-12 07:06:32.795030] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.885 [2024-05-12 07:06:32.795046] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.885 [2024-05-12 07:06:32.797313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.885 [2024-05-12 07:06:32.806664] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.885 [2024-05-12 07:06:32.807109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.807317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.807369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.885 [2024-05-12 07:06:32.807388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.885 [2024-05-12 07:06:32.807572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.885 [2024-05-12 07:06:32.807736] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.885 [2024-05-12 07:06:32.807761] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.885 [2024-05-12 07:06:32.807777] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.885 [2024-05-12 07:06:32.810149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.885 [2024-05-12 07:06:32.819006] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.885 [2024-05-12 07:06:32.819452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.819620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.819645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.885 [2024-05-12 07:06:32.819661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.885 [2024-05-12 07:06:32.819814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.885 [2024-05-12 07:06:32.820007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.885 [2024-05-12 07:06:32.820031] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.885 [2024-05-12 07:06:32.820048] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.885 [2024-05-12 07:06:32.822531] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.885 [2024-05-12 07:06:32.831513] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.885 [2024-05-12 07:06:32.831932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.832164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.832190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.885 [2024-05-12 07:06:32.832226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.885 [2024-05-12 07:06:32.832411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.885 [2024-05-12 07:06:32.832574] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.885 [2024-05-12 07:06:32.832598] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.885 [2024-05-12 07:06:32.832614] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.885 [2024-05-12 07:06:32.834889] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.885 [2024-05-12 07:06:32.843958] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.885 [2024-05-12 07:06:32.844396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.844612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.844638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.885 [2024-05-12 07:06:32.844654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.885 [2024-05-12 07:06:32.844812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.885 [2024-05-12 07:06:32.844983] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.885 [2024-05-12 07:06:32.845007] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.885 [2024-05-12 07:06:32.845023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.885 [2024-05-12 07:06:32.847469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.885 [2024-05-12 07:06:32.856497] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.885 [2024-05-12 07:06:32.856927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.857155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-05-12 07:06:32.857184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.857202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.857369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.857520] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.857546] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.857562] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.859896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.886 [2024-05-12 07:06:32.868953] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.886 [2024-05-12 07:06:32.869337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.869535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.869565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.869583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.869805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.869994] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.870022] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.870038] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.872463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.886 [2024-05-12 07:06:32.881379] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.886 [2024-05-12 07:06:32.881777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.882012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.882042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.882060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.882227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.882397] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.882422] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.882438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.884718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.886 [2024-05-12 07:06:32.893829] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.886 [2024-05-12 07:06:32.894180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.894436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.894466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.894484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.894632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.894815] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.894842] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.894859] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.897325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.886 [2024-05-12 07:06:32.906541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.886 [2024-05-12 07:06:32.906882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.907179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.907208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.907227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.907394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.907569] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.907594] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.907611] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.910106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.886 [2024-05-12 07:06:32.919107] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.886 [2024-05-12 07:06:32.919528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.919768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.919811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.919829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.920073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.920262] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.920287] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.920304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.922705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.886 [2024-05-12 07:06:32.931780] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.886 [2024-05-12 07:06:32.932112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.932340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.932366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.932382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.932604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.932844] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.932873] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.932890] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.935050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.886 [2024-05-12 07:06:32.944367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.886 [2024-05-12 07:06:32.944713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.944920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.944951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.944969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.945101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.945270] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.945296] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.945318] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.947611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.886 [2024-05-12 07:06:32.956943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.886 [2024-05-12 07:06:32.957335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.957690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.957752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.957772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.957938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.958089] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.958113] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.958129] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.960578] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.886 [2024-05-12 07:06:32.969508] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.886 [2024-05-12 07:06:32.969863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.970069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-05-12 07:06:32.970094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.886 [2024-05-12 07:06:32.970110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.886 [2024-05-12 07:06:32.970286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.886 [2024-05-12 07:06:32.970420] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.886 [2024-05-12 07:06:32.970456] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.886 [2024-05-12 07:06:32.970472] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.886 [2024-05-12 07:06:32.972929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.887 [2024-05-12 07:06:32.982039] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.887 [2024-05-12 07:06:32.982388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-05-12 07:06:32.982690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-05-12 07:06:32.982780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.887 [2024-05-12 07:06:32.982799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.887 [2024-05-12 07:06:32.982965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.887 [2024-05-12 07:06:32.983152] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.887 [2024-05-12 07:06:32.983176] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.887 [2024-05-12 07:06:32.983197] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.887 [2024-05-12 07:06:32.985684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.887 [2024-05-12 07:06:32.994747] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.887 [2024-05-12 07:06:32.995167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-05-12 07:06:32.995513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-05-12 07:06:32.995569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.887 [2024-05-12 07:06:32.995587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.887 [2024-05-12 07:06:32.995787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.887 [2024-05-12 07:06:32.995976] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.887 [2024-05-12 07:06:32.996001] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.887 [2024-05-12 07:06:32.996018] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.887 [2024-05-12 07:06:32.998377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.887 [2024-05-12 07:06:33.007463] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.887 [2024-05-12 07:06:33.007831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-05-12 07:06:33.008068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-05-12 07:06:33.008098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:25.887 [2024-05-12 07:06:33.008116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:25.887 [2024-05-12 07:06:33.008282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:25.887 [2024-05-12 07:06:33.008452] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.887 [2024-05-12 07:06:33.008477] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.887 [2024-05-12 07:06:33.008494] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.887 [2024-05-12 07:06:33.010888] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.148 [2024-05-12 07:06:33.019842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.148 [2024-05-12 07:06:33.020183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.020466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.020492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.148 [2024-05-12 07:06:33.020524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.148 [2024-05-12 07:06:33.020692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.148 [2024-05-12 07:06:33.020883] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.148 [2024-05-12 07:06:33.020905] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.148 [2024-05-12 07:06:33.020919] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.148 [2024-05-12 07:06:33.023448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.148 [2024-05-12 07:06:33.032338] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.148 [2024-05-12 07:06:33.032818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.033047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.033077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.148 [2024-05-12 07:06:33.033095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.148 [2024-05-12 07:06:33.033208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.148 [2024-05-12 07:06:33.033378] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.148 [2024-05-12 07:06:33.033403] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.148 [2024-05-12 07:06:33.033421] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.148 [2024-05-12 07:06:33.035602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.148 [2024-05-12 07:06:33.044937] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.148 [2024-05-12 07:06:33.045342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.045646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.045676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.148 [2024-05-12 07:06:33.045706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.148 [2024-05-12 07:06:33.045912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.148 [2024-05-12 07:06:33.046046] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.148 [2024-05-12 07:06:33.046071] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.148 [2024-05-12 07:06:33.046088] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.148 [2024-05-12 07:06:33.048520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.148 [2024-05-12 07:06:33.057418] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.148 [2024-05-12 07:06:33.057840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.058021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.058047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.148 [2024-05-12 07:06:33.058063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.148 [2024-05-12 07:06:33.058234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.148 [2024-05-12 07:06:33.058421] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.148 [2024-05-12 07:06:33.058447] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.148 [2024-05-12 07:06:33.058463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.148 [2024-05-12 07:06:33.060886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.148 [2024-05-12 07:06:33.070190] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.148 [2024-05-12 07:06:33.070552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.070717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.070743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.148 [2024-05-12 07:06:33.070759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.148 [2024-05-12 07:06:33.070918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.148 [2024-05-12 07:06:33.071088] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.148 [2024-05-12 07:06:33.071112] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.148 [2024-05-12 07:06:33.071128] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.148 [2024-05-12 07:06:33.073540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.148 [2024-05-12 07:06:33.082616] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.148 [2024-05-12 07:06:33.082956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.083119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.148 [2024-05-12 07:06:33.083143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.148 [2024-05-12 07:06:33.083158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.148 [2024-05-12 07:06:33.083332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.148 [2024-05-12 07:06:33.083485] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.148 [2024-05-12 07:06:33.083508] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.148 [2024-05-12 07:06:33.083525] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.148 [2024-05-12 07:06:33.085653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.148 [2024-05-12 07:06:33.095331] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.148 [2024-05-12 07:06:33.095710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.095986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.096041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.149 [2024-05-12 07:06:33.096059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.149 [2024-05-12 07:06:33.096243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.149 [2024-05-12 07:06:33.096395] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.149 [2024-05-12 07:06:33.096421] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.149 [2024-05-12 07:06:33.096437] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.149 [2024-05-12 07:06:33.098653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.149 [2024-05-12 07:06:33.108024] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.149 [2024-05-12 07:06:33.108451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.108662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.108692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.149 [2024-05-12 07:06:33.108726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.149 [2024-05-12 07:06:33.108913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.149 [2024-05-12 07:06:33.109064] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.149 [2024-05-12 07:06:33.109090] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.149 [2024-05-12 07:06:33.109106] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.149 [2024-05-12 07:06:33.111321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.149 [2024-05-12 07:06:33.120574] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.149 [2024-05-12 07:06:33.120946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.121216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.121243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.149 [2024-05-12 07:06:33.121259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.149 [2024-05-12 07:06:33.121449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.149 [2024-05-12 07:06:33.121620] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.149 [2024-05-12 07:06:33.121645] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.149 [2024-05-12 07:06:33.121662] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.149 [2024-05-12 07:06:33.124016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.149 [2024-05-12 07:06:33.133184] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.149 [2024-05-12 07:06:33.133588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.133792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.133822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.149 [2024-05-12 07:06:33.133840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.149 [2024-05-12 07:06:33.133988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.149 [2024-05-12 07:06:33.134157] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.149 [2024-05-12 07:06:33.134183] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.149 [2024-05-12 07:06:33.134200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.149 [2024-05-12 07:06:33.136650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.149 [2024-05-12 07:06:33.145739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.149 [2024-05-12 07:06:33.146093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.146298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.146326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.149 [2024-05-12 07:06:33.146349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.149 [2024-05-12 07:06:33.146534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.149 [2024-05-12 07:06:33.146756] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.149 [2024-05-12 07:06:33.146782] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.149 [2024-05-12 07:06:33.146798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.149 [2024-05-12 07:06:33.149047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.149 [2024-05-12 07:06:33.158344] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.149 [2024-05-12 07:06:33.158726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.158902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.158930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.149 [2024-05-12 07:06:33.158948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.149 [2024-05-12 07:06:33.159114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.149 [2024-05-12 07:06:33.159303] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.149 [2024-05-12 07:06:33.159329] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.149 [2024-05-12 07:06:33.159345] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.149 [2024-05-12 07:06:33.161827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.149 [2024-05-12 07:06:33.171062] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.149 [2024-05-12 07:06:33.171398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.171625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.171653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.149 [2024-05-12 07:06:33.171670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.149 [2024-05-12 07:06:33.171814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.149 [2024-05-12 07:06:33.172003] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.149 [2024-05-12 07:06:33.172028] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.149 [2024-05-12 07:06:33.172045] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.149 [2024-05-12 07:06:33.174238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.149 [2024-05-12 07:06:33.183565] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.149 [2024-05-12 07:06:33.183934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.184135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.184165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.149 [2024-05-12 07:06:33.184183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.149 [2024-05-12 07:06:33.184355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.149 [2024-05-12 07:06:33.184561] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.149 [2024-05-12 07:06:33.184586] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.149 [2024-05-12 07:06:33.184602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.149 [2024-05-12 07:06:33.186890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.149 [2024-05-12 07:06:33.196235] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.149 [2024-05-12 07:06:33.196636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.196886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.149 [2024-05-12 07:06:33.196916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.149 [2024-05-12 07:06:33.196934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.150 [2024-05-12 07:06:33.197101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.150 [2024-05-12 07:06:33.197288] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.150 [2024-05-12 07:06:33.197314] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.150 [2024-05-12 07:06:33.197331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.150 [2024-05-12 07:06:33.199788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.150 [2024-05-12 07:06:33.208886] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.150 [2024-05-12 07:06:33.209291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.209505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.209529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.150 [2024-05-12 07:06:33.209545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.150 [2024-05-12 07:06:33.209774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.150 [2024-05-12 07:06:33.209890] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.150 [2024-05-12 07:06:33.209916] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.150 [2024-05-12 07:06:33.209933] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.150 [2024-05-12 07:06:33.212091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.150 [2024-05-12 07:06:33.221380] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.150 [2024-05-12 07:06:33.221810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.222026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.222063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.150 [2024-05-12 07:06:33.222094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.150 [2024-05-12 07:06:33.222251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.150 [2024-05-12 07:06:33.222439] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.150 [2024-05-12 07:06:33.222466] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.150 [2024-05-12 07:06:33.222482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.150 [2024-05-12 07:06:33.224701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.150 [2024-05-12 07:06:33.234245] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.150 [2024-05-12 07:06:33.234636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.234837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.234863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.150 [2024-05-12 07:06:33.234879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.150 [2024-05-12 07:06:33.235012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.150 [2024-05-12 07:06:33.235210] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.150 [2024-05-12 07:06:33.235237] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.150 [2024-05-12 07:06:33.235253] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.150 [2024-05-12 07:06:33.237594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.150 [2024-05-12 07:06:33.246912] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.150 [2024-05-12 07:06:33.247318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.247553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.247582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.150 [2024-05-12 07:06:33.247601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.150 [2024-05-12 07:06:33.247782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.150 [2024-05-12 07:06:33.247953] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.150 [2024-05-12 07:06:33.247979] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.150 [2024-05-12 07:06:33.247995] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.150 [2024-05-12 07:06:33.250371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.150 [2024-05-12 07:06:33.259577] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.150 [2024-05-12 07:06:33.259991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.260210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.260235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.150 [2024-05-12 07:06:33.260251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.150 [2024-05-12 07:06:33.260415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.150 [2024-05-12 07:06:33.260567] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.150 [2024-05-12 07:06:33.260597] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.150 [2024-05-12 07:06:33.260615] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.150 [2024-05-12 07:06:33.262897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.150 [2024-05-12 07:06:33.272356] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.150 [2024-05-12 07:06:33.272665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.272893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.150 [2024-05-12 07:06:33.272924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.150 [2024-05-12 07:06:33.272942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.150 [2024-05-12 07:06:33.273125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.150 [2024-05-12 07:06:33.273277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.150 [2024-05-12 07:06:33.273301] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.150 [2024-05-12 07:06:33.273317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.150 [2024-05-12 07:06:33.275553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.411 [2024-05-12 07:06:33.285070] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.411 [2024-05-12 07:06:33.285446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.285672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.285720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.411 [2024-05-12 07:06:33.285741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.411 [2024-05-12 07:06:33.285944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.411 [2024-05-12 07:06:33.286150] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.411 [2024-05-12 07:06:33.286176] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.411 [2024-05-12 07:06:33.286192] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.411 [2024-05-12 07:06:33.288427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.411 [2024-05-12 07:06:33.297826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.411 [2024-05-12 07:06:33.298194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.298489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.298542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.411 [2024-05-12 07:06:33.298560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.411 [2024-05-12 07:06:33.298720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.411 [2024-05-12 07:06:33.298909] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.411 [2024-05-12 07:06:33.298935] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.411 [2024-05-12 07:06:33.298957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.411 [2024-05-12 07:06:33.301263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.411 [2024-05-12 07:06:33.310398] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.411 [2024-05-12 07:06:33.310827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.311055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.311081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.411 [2024-05-12 07:06:33.311097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.411 [2024-05-12 07:06:33.311283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.411 [2024-05-12 07:06:33.311453] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.411 [2024-05-12 07:06:33.311479] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.411 [2024-05-12 07:06:33.311496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.411 [2024-05-12 07:06:33.314158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.411 [2024-05-12 07:06:33.322781] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.411 [2024-05-12 07:06:33.323184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.323384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.323414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.411 [2024-05-12 07:06:33.323432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.411 [2024-05-12 07:06:33.323636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.411 [2024-05-12 07:06:33.323821] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.411 [2024-05-12 07:06:33.323848] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.411 [2024-05-12 07:06:33.323865] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.411 [2024-05-12 07:06:33.326225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.411 [2024-05-12 07:06:33.335376] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.411 [2024-05-12 07:06:33.335745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.335919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.335947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.411 [2024-05-12 07:06:33.335964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.411 [2024-05-12 07:06:33.336166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.411 [2024-05-12 07:06:33.336336] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.411 [2024-05-12 07:06:33.336361] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.411 [2024-05-12 07:06:33.336378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.411 [2024-05-12 07:06:33.338931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.411 [2024-05-12 07:06:33.347786] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.411 [2024-05-12 07:06:33.348349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.348560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.348589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.411 [2024-05-12 07:06:33.348606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.411 [2024-05-12 07:06:33.348808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.411 [2024-05-12 07:06:33.349017] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.411 [2024-05-12 07:06:33.349044] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.411 [2024-05-12 07:06:33.349060] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.411 [2024-05-12 07:06:33.351291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.411 [2024-05-12 07:06:33.360416] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.411 [2024-05-12 07:06:33.360831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.361051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.411 [2024-05-12 07:06:33.361078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.411 [2024-05-12 07:06:33.361109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.411 [2024-05-12 07:06:33.361307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.411 [2024-05-12 07:06:33.361461] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.411 [2024-05-12 07:06:33.361487] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.361503] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.363918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.373156] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.373601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.373836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.373868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.412 [2024-05-12 07:06:33.373886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.412 [2024-05-12 07:06:33.374035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.412 [2024-05-12 07:06:33.374187] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.412 [2024-05-12 07:06:33.374212] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.374229] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.376388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.385847] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.386252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.386451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.386480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.412 [2024-05-12 07:06:33.386498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.412 [2024-05-12 07:06:33.386664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.412 [2024-05-12 07:06:33.386868] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.412 [2024-05-12 07:06:33.386895] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.386911] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.389233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.398451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.398858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.399062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.399091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.412 [2024-05-12 07:06:33.399109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.412 [2024-05-12 07:06:33.399240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.412 [2024-05-12 07:06:33.399428] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.412 [2024-05-12 07:06:33.399453] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.399470] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.401891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.411004] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.411423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.411608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.411635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.412 [2024-05-12 07:06:33.411652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.412 [2024-05-12 07:06:33.411828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.412 [2024-05-12 07:06:33.412018] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.412 [2024-05-12 07:06:33.412044] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.412060] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.414473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.423688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.424062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.424425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.424484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.412 [2024-05-12 07:06:33.424502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.412 [2024-05-12 07:06:33.424670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.412 [2024-05-12 07:06:33.424798] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.412 [2024-05-12 07:06:33.424823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.424840] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.427055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.436339] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.436711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.436916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.436948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.412 [2024-05-12 07:06:33.436967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.412 [2024-05-12 07:06:33.437116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.412 [2024-05-12 07:06:33.437305] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.412 [2024-05-12 07:06:33.437331] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.437347] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.439600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.449012] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.449391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.449587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.449613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.412 [2024-05-12 07:06:33.449629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.412 [2024-05-12 07:06:33.449840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.412 [2024-05-12 07:06:33.450011] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.412 [2024-05-12 07:06:33.450037] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.450053] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.452318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.461596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.461972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.462142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.462175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.412 [2024-05-12 07:06:33.462193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.412 [2024-05-12 07:06:33.462323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.412 [2024-05-12 07:06:33.462457] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.412 [2024-05-12 07:06:33.462480] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.462496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.464690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.474235] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.474599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.474807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.474838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.412 [2024-05-12 07:06:33.474857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.412 [2024-05-12 07:06:33.475060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.412 [2024-05-12 07:06:33.475194] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.412 [2024-05-12 07:06:33.475219] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.412 [2024-05-12 07:06:33.475236] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.412 [2024-05-12 07:06:33.477600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.412 [2024-05-12 07:06:33.486896] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.412 [2024-05-12 07:06:33.487260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.487436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.412 [2024-05-12 07:06:33.487465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.413 [2024-05-12 07:06:33.487484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.413 [2024-05-12 07:06:33.487633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.413 [2024-05-12 07:06:33.487851] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.413 [2024-05-12 07:06:33.487876] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.413 [2024-05-12 07:06:33.487892] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.413 [2024-05-12 07:06:33.490215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.413 [2024-05-12 07:06:33.499540] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.413 [2024-05-12 07:06:33.499884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.413 [2024-05-12 07:06:33.500045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.413 [2024-05-12 07:06:33.500070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.413 [2024-05-12 07:06:33.500091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.413 [2024-05-12 07:06:33.500341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.413 [2024-05-12 07:06:33.500459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.413 [2024-05-12 07:06:33.500484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.413 [2024-05-12 07:06:33.500499] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.413 [2024-05-12 07:06:33.502770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.413 [2024-05-12 07:06:33.511896] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.413 [2024-05-12 07:06:33.512275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.413 [2024-05-12 07:06:33.512567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.413 [2024-05-12 07:06:33.512596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.413 [2024-05-12 07:06:33.512615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.413 [2024-05-12 07:06:33.512775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.413 [2024-05-12 07:06:33.512945] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.413 [2024-05-12 07:06:33.512970] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.413 [2024-05-12 07:06:33.512992] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.413 [2024-05-12 07:06:33.515205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.413 [2024-05-12 07:06:33.524208] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.413 [2024-05-12 07:06:33.524620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.413 [2024-05-12 07:06:33.524846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.413 [2024-05-12 07:06:33.524872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.413 [2024-05-12 07:06:33.524889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.413 [2024-05-12 07:06:33.525008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.413 [2024-05-12 07:06:33.525123] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.413 [2024-05-12 07:06:33.525147] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.413 [2024-05-12 07:06:33.525162] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.413 [2024-05-12 07:06:33.527464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.413 [2024-05-12 07:06:33.536862] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.413 [2024-05-12 07:06:33.537154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.413 [2024-05-12 07:06:33.537390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.413 [2024-05-12 07:06:33.537434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.413 [2024-05-12 07:06:33.537452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.413 [2024-05-12 07:06:33.537589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.413 [2024-05-12 07:06:33.537751] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.413 [2024-05-12 07:06:33.537775] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.413 [2024-05-12 07:06:33.537790] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.673 [2024-05-12 07:06:33.540277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.673 [2024-05-12 07:06:33.549440] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.673 [2024-05-12 07:06:33.549836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.673 [2024-05-12 07:06:33.550024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.673 [2024-05-12 07:06:33.550054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.673 [2024-05-12 07:06:33.550073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.673 [2024-05-12 07:06:33.550239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.673 [2024-05-12 07:06:33.550409] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.673 [2024-05-12 07:06:33.550434] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.673 [2024-05-12 07:06:33.550450] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.673 [2024-05-12 07:06:33.552814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.673 [2024-05-12 07:06:33.561836] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.673 [2024-05-12 07:06:33.562250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.673 [2024-05-12 07:06:33.562460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.673 [2024-05-12 07:06:33.562504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.673 [2024-05-12 07:06:33.562523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.673 [2024-05-12 07:06:33.562672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.673 [2024-05-12 07:06:33.562859] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.673 [2024-05-12 07:06:33.562884] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.673 [2024-05-12 07:06:33.562901] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.673 [2024-05-12 07:06:33.565326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.673 [2024-05-12 07:06:33.574324] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.673 [2024-05-12 07:06:33.574772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.673 [2024-05-12 07:06:33.574923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.673 [2024-05-12 07:06:33.574949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.673 [2024-05-12 07:06:33.574966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.673 [2024-05-12 07:06:33.575134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.673 [2024-05-12 07:06:33.575328] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.673 [2024-05-12 07:06:33.575354] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.673 [2024-05-12 07:06:33.575370] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.673 [2024-05-12 07:06:33.577586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.673 [2024-05-12 07:06:33.586902] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.673 [2024-05-12 07:06:33.587325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.587499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.587528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.673 [2024-05-12 07:06:33.587545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.673 [2024-05-12 07:06:33.587723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.673 [2024-05-12 07:06:33.587893] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.673 [2024-05-12 07:06:33.587916] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.673 [2024-05-12 07:06:33.587931] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.673 [2024-05-12 07:06:33.590147] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.673 [2024-05-12 07:06:33.599591] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.673 [2024-05-12 07:06:33.599925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.600245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.600286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.673 [2024-05-12 07:06:33.600301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.673 [2024-05-12 07:06:33.600479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.673 [2024-05-12 07:06:33.600594] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.673 [2024-05-12 07:06:33.600618] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.673 [2024-05-12 07:06:33.600634] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.673 [2024-05-12 07:06:33.602829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.673 [2024-05-12 07:06:33.612039] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.673 [2024-05-12 07:06:33.612443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.612636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.612664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.673 [2024-05-12 07:06:33.612682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.673 [2024-05-12 07:06:33.612821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.673 [2024-05-12 07:06:33.613045] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.673 [2024-05-12 07:06:33.613076] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.673 [2024-05-12 07:06:33.613093] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.673 [2024-05-12 07:06:33.615322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.673 [2024-05-12 07:06:33.624793] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.673 [2024-05-12 07:06:33.625229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.625571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.625617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.673 [2024-05-12 07:06:33.625635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.673 [2024-05-12 07:06:33.625780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.673 [2024-05-12 07:06:33.625932] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.673 [2024-05-12 07:06:33.625958] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.673 [2024-05-12 07:06:33.625975] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.673 [2024-05-12 07:06:33.628313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.673 [2024-05-12 07:06:33.637621] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.673 [2024-05-12 07:06:33.637943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.638169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.638216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.673 [2024-05-12 07:06:33.638235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.673 [2024-05-12 07:06:33.638439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.673 [2024-05-12 07:06:33.638610] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.673 [2024-05-12 07:06:33.638635] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.673 [2024-05-12 07:06:33.638651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.673 [2024-05-12 07:06:33.640965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.673 [2024-05-12 07:06:33.650105] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.673 [2024-05-12 07:06:33.650438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.673 [2024-05-12 07:06:33.650663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.650693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.650726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.650931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.651082] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.651108] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.674 [2024-05-12 07:06:33.651130] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.674 [2024-05-12 07:06:33.653326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.674 [2024-05-12 07:06:33.662871] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.674 [2024-05-12 07:06:33.663255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.663527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.663556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.663575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.663773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.663961] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.663986] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.674 [2024-05-12 07:06:33.664002] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.674 [2024-05-12 07:06:33.666198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.674 [2024-05-12 07:06:33.675394] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.674 [2024-05-12 07:06:33.675779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.676018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.676048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.676066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.676159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.676310] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.676333] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.674 [2024-05-12 07:06:33.676349] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.674 [2024-05-12 07:06:33.678524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.674 [2024-05-12 07:06:33.687924] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.674 [2024-05-12 07:06:33.688297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.688534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.688577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.688596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.688759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.688966] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.688992] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.674 [2024-05-12 07:06:33.689008] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.674 [2024-05-12 07:06:33.691222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.674 [2024-05-12 07:06:33.700767] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.674 [2024-05-12 07:06:33.701135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.701344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.701391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.701409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.701558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.701740] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.701767] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.674 [2024-05-12 07:06:33.701783] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.674 [2024-05-12 07:06:33.704140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.674 [2024-05-12 07:06:33.713329] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.674 [2024-05-12 07:06:33.713679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.713892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.713920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.713936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.714070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.714295] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.714320] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.674 [2024-05-12 07:06:33.714336] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.674 [2024-05-12 07:06:33.716759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.674 [2024-05-12 07:06:33.725966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.674 [2024-05-12 07:06:33.726332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.726561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.726587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.726603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.726760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.726900] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.726925] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.674 [2024-05-12 07:06:33.726942] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.674 [2024-05-12 07:06:33.729497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.674 [2024-05-12 07:06:33.738404] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.674 [2024-05-12 07:06:33.738832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.739016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.739061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.739080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.739246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.739416] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.739441] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.674 [2024-05-12 07:06:33.739457] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.674 [2024-05-12 07:06:33.741615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.674 [2024-05-12 07:06:33.750919] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.674 [2024-05-12 07:06:33.751374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.751766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.751795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.751813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.751961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.752150] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.752175] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.674 [2024-05-12 07:06:33.752191] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.674 [2024-05-12 07:06:33.754385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.674 [2024-05-12 07:06:33.763553] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.674 [2024-05-12 07:06:33.763957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.764186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.674 [2024-05-12 07:06:33.764216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.674 [2024-05-12 07:06:33.764234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.674 [2024-05-12 07:06:33.764401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.674 [2024-05-12 07:06:33.764589] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.674 [2024-05-12 07:06:33.764614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.675 [2024-05-12 07:06:33.764630] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.675 [2024-05-12 07:06:33.767212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.675 [2024-05-12 07:06:33.775860] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.675 [2024-05-12 07:06:33.776344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.675 [2024-05-12 07:06:33.776685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.675 [2024-05-12 07:06:33.776769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.675 [2024-05-12 07:06:33.776788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.675 [2024-05-12 07:06:33.776937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.675 [2024-05-12 07:06:33.777125] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.675 [2024-05-12 07:06:33.777150] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.675 [2024-05-12 07:06:33.777167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.675 [2024-05-12 07:06:33.779492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.675 [2024-05-12 07:06:33.788415] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.675 [2024-05-12 07:06:33.788756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.675 [2024-05-12 07:06:33.789017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.675 [2024-05-12 07:06:33.789043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.675 [2024-05-12 07:06:33.789059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.675 [2024-05-12 07:06:33.789215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.675 [2024-05-12 07:06:33.789417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.675 [2024-05-12 07:06:33.789443] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.675 [2024-05-12 07:06:33.789460] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.675 [2024-05-12 07:06:33.791955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.934 [2024-05-12 07:06:33.800958] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.934 [2024-05-12 07:06:33.801363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.934 [2024-05-12 07:06:33.801563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.934 [2024-05-12 07:06:33.801591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.934 [2024-05-12 07:06:33.801610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.934 [2024-05-12 07:06:33.801751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.934 [2024-05-12 07:06:33.801923] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.934 [2024-05-12 07:06:33.801958] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.934 [2024-05-12 07:06:33.801975] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.934 [2024-05-12 07:06:33.804425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.934 [2024-05-12 07:06:33.813562] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.934 [2024-05-12 07:06:33.813941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.934 [2024-05-12 07:06:33.814154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.934 [2024-05-12 07:06:33.814195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.934 [2024-05-12 07:06:33.814216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.934 [2024-05-12 07:06:33.814360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.934 [2024-05-12 07:06:33.814525] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.934 [2024-05-12 07:06:33.814550] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.934 [2024-05-12 07:06:33.814567] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.934 [2024-05-12 07:06:33.816971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.934 [2024-05-12 07:06:33.826210] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.934 [2024-05-12 07:06:33.826571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.935 [2024-05-12 07:06:33.826790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.935 [2024-05-12 07:06:33.826822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.935 [2024-05-12 07:06:33.826841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.935 [2024-05-12 07:06:33.827008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.935 [2024-05-12 07:06:33.827160] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.935 [2024-05-12 07:06:33.827184] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.935 [2024-05-12 07:06:33.827200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.935 [2024-05-12 07:06:33.829592] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.935 [2024-05-12 07:06:33.838630] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.935 [2024-05-12 07:06:33.838979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.935 [2024-05-12 07:06:33.839223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.935 [2024-05-12 07:06:33.839255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.935 [2024-05-12 07:06:33.839273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.935 [2024-05-12 07:06:33.839459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.935 [2024-05-12 07:06:33.839628] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.935 [2024-05-12 07:06:33.839653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.935 [2024-05-12 07:06:33.839669] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.935 [2024-05-12 07:06:33.842118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.935 [2024-05-12 07:06:33.850996] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.935 [2024-05-12 07:06:33.851301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.935 [2024-05-12 07:06:33.851503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.935 [2024-05-12 07:06:33.851532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.935 [2024-05-12 07:06:33.851550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.935 [2024-05-12 07:06:33.851718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.935 [2024-05-12 07:06:33.851889] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.935 [2024-05-12 07:06:33.851913] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.935 [2024-05-12 07:06:33.851928] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.935 [2024-05-12 07:06:33.854389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.935 [2024-05-12 07:06:33.863511] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:26.935 [2024-05-12 07:06:33.863863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.935 [2024-05-12 07:06:33.864053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.935 [2024-05-12 07:06:33.864083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:26.935 [2024-05-12 07:06:33.864101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:26.935 [2024-05-12 07:06:33.864304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:26.935 [2024-05-12 07:06:33.864475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:26.935 [2024-05-12 07:06:33.864499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:26.935 [2024-05-12 07:06:33.864515] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:26.935 [2024-05-12 07:06:33.866974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:26.935 [2024-05-12 07:06:33.876210] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.935 [2024-05-12 07:06:33.876576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.876763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.876793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.935 [2024-05-12 07:06:33.876811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.935 [2024-05-12 07:06:33.876977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.935 [2024-05-12 07:06:33.877147] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.935 [2024-05-12 07:06:33.877171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.935 [2024-05-12 07:06:33.877188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.935 [2024-05-12 07:06:33.879492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.935 [2024-05-12 07:06:33.888917] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.935 [2024-05-12 07:06:33.889342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.889575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.889603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.935 [2024-05-12 07:06:33.889621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.935 [2024-05-12 07:06:33.889816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.935 [2024-05-12 07:06:33.889993] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.935 [2024-05-12 07:06:33.890019] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.935 [2024-05-12 07:06:33.890035] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.935 [2024-05-12 07:06:33.892411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.935 [2024-05-12 07:06:33.901274] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.935 [2024-05-12 07:06:33.901675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.901899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.901928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.935 [2024-05-12 07:06:33.901946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.935 [2024-05-12 07:06:33.902112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.935 [2024-05-12 07:06:33.902282] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.935 [2024-05-12 07:06:33.902306] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.935 [2024-05-12 07:06:33.902321] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.935 [2024-05-12 07:06:33.904816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.935 [2024-05-12 07:06:33.913933] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.935 [2024-05-12 07:06:33.914314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.914533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.914559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.935 [2024-05-12 07:06:33.914575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.935 [2024-05-12 07:06:33.914748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.935 [2024-05-12 07:06:33.914901] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.935 [2024-05-12 07:06:33.914926] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.935 [2024-05-12 07:06:33.914942] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.935 [2024-05-12 07:06:33.917028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.935 [2024-05-12 07:06:33.926546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.935 [2024-05-12 07:06:33.926987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.927202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.927227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.935 [2024-05-12 07:06:33.927243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.935 [2024-05-12 07:06:33.927404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.935 [2024-05-12 07:06:33.927556] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.935 [2024-05-12 07:06:33.927585] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.935 [2024-05-12 07:06:33.927602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.935 [2024-05-12 07:06:33.929914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.935 [2024-05-12 07:06:33.939176] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.935 [2024-05-12 07:06:33.939583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.939838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.935 [2024-05-12 07:06:33.939880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.935 [2024-05-12 07:06:33.939896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.935 [2024-05-12 07:06:33.940077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.935 [2024-05-12 07:06:33.940229] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.935 [2024-05-12 07:06:33.940253] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.936 [2024-05-12 07:06:33.940269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.936 [2024-05-12 07:06:33.942724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3142910 Killed "${NVMF_APP[@]}" "$@" 00:26:26.936 07:06:33 -- host/bdevperf.sh@36 -- # tgt_init 00:26:26.936 07:06:33 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:26.936 07:06:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:26.936 07:06:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:26.936 07:06:33 -- common/autotest_common.sh@10 -- # set +x 00:26:26.936 07:06:33 -- nvmf/common.sh@469 -- # nvmfpid=3144030 00:26:26.936 07:06:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:26.936 07:06:33 -- nvmf/common.sh@470 -- # waitforlisten 3144030 00:26:26.936 07:06:33 -- common/autotest_common.sh@819 -- # '[' -z 3144030 ']' 00:26:26.936 07:06:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.936 07:06:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:26.936 07:06:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:26.936 07:06:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:26.936 07:06:33 -- common/autotest_common.sh@10 -- # set +x 00:26:26.936 [2024-05-12 07:06:33.951892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.936 [2024-05-12 07:06:33.952353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:33.952579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:33.952608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.936 [2024-05-12 07:06:33.952626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.936 [2024-05-12 07:06:33.952813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.936 [2024-05-12 07:06:33.952967] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.936 [2024-05-12 07:06:33.953004] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.936 [2024-05-12 07:06:33.953022] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.936 [2024-05-12 07:06:33.955382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.936 [2024-05-12 07:06:33.964421] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.936 [2024-05-12 07:06:33.964773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:33.964959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:33.964986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.936 [2024-05-12 07:06:33.965002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.936 [2024-05-12 07:06:33.965174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.936 [2024-05-12 07:06:33.965362] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.936 [2024-05-12 07:06:33.965387] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.936 [2024-05-12 07:06:33.965403] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.936 [2024-05-12 07:06:33.967890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.936 [2024-05-12 07:06:33.977069] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.936 [2024-05-12 07:06:33.977420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:33.977608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:33.977634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.936 [2024-05-12 07:06:33.977651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.936 [2024-05-12 07:06:33.977826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.936 [2024-05-12 07:06:33.978016] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.936 [2024-05-12 07:06:33.978042] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.936 [2024-05-12 07:06:33.978058] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.936 [2024-05-12 07:06:33.980440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.936 [2024-05-12 07:06:33.989613] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.936 [2024-05-12 07:06:33.989990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:33.990219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:33.990250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.936 [2024-05-12 07:06:33.990268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.936 [2024-05-12 07:06:33.990417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.936 [2024-05-12 07:06:33.990605] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.936 [2024-05-12 07:06:33.990630] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.936 [2024-05-12 07:06:33.990646] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.936 [2024-05-12 07:06:33.991165] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:26.936 [2024-05-12 07:06:33.991244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.936 [2024-05-12 07:06:33.992993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.936 [2024-05-12 07:06:34.002104] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.936 [2024-05-12 07:06:34.002530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:34.002719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:34.002746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.936 [2024-05-12 07:06:34.002762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.936 [2024-05-12 07:06:34.002895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.936 [2024-05-12 07:06:34.003047] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.936 [2024-05-12 07:06:34.003072] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.936 [2024-05-12 07:06:34.003088] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.936 [2024-05-12 07:06:34.005335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.936 [2024-05-12 07:06:34.014649] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.936 [2024-05-12 07:06:34.015057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:34.015278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:34.015305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.936 [2024-05-12 07:06:34.015322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.936 [2024-05-12 07:06:34.015509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.936 [2024-05-12 07:06:34.015674] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.936 [2024-05-12 07:06:34.015706] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.936 [2024-05-12 07:06:34.015724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.936 [2024-05-12 07:06:34.018195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.936 [2024-05-12 07:06:34.027102] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.936 [2024-05-12 07:06:34.027481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:34.027750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:34.027777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.936 [2024-05-12 07:06:34.027793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.936 [2024-05-12 07:06:34.027924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.936 [2024-05-12 07:06:34.028109] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.936 [2024-05-12 07:06:34.028134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.936 [2024-05-12 07:06:34.028156] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.936 [2024-05-12 07:06:34.030450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.936 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.936 [2024-05-12 07:06:34.039540] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.936 [2024-05-12 07:06:34.039900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:34.040155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.936 [2024-05-12 07:06:34.040182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.936 [2024-05-12 07:06:34.040198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.936 [2024-05-12 07:06:34.040370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.936 [2024-05-12 07:06:34.040523] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.936 [2024-05-12 07:06:34.040548] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.936 [2024-05-12 07:06:34.040564] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.936 [2024-05-12 07:06:34.042897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.936 [2024-05-12 07:06:34.052109] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.937 [2024-05-12 07:06:34.052496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.937 [2024-05-12 07:06:34.052689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.937 [2024-05-12 07:06:34.052722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:26.937 [2024-05-12 07:06:34.052739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:26.937 [2024-05-12 07:06:34.052900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:26.937 [2024-05-12 07:06:34.053082] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.937 [2024-05-12 07:06:34.053107] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.937 [2024-05-12 07:06:34.053124] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.937 [2024-05-12 07:06:34.055691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.196 [2024-05-12 07:06:34.064756] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.196 [2024-05-12 07:06:34.065129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.196 [2024-05-12 07:06:34.065200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:27.196 [2024-05-12 07:06:34.065372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.196 [2024-05-12 07:06:34.065401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.196 [2024-05-12 07:06:34.065419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.196 [2024-05-12 07:06:34.065585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.196 [2024-05-12 07:06:34.065776] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.196 [2024-05-12 07:06:34.065799] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.196 [2024-05-12 07:06:34.065819] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.196 [2024-05-12 07:06:34.068135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.196 [2024-05-12 07:06:34.077246] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.196 [2024-05-12 07:06:34.077851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.196 [2024-05-12 07:06:34.078211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.196 [2024-05-12 07:06:34.078241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.196 [2024-05-12 07:06:34.078264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.196 [2024-05-12 07:06:34.078404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.196 [2024-05-12 07:06:34.078599] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.196 [2024-05-12 07:06:34.078625] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.196 [2024-05-12 07:06:34.078643] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.196 [2024-05-12 07:06:34.080754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.196 [2024-05-12 07:06:34.090021] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.196 [2024-05-12 07:06:34.090410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.196 [2024-05-12 07:06:34.090652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.196 [2024-05-12 07:06:34.090682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.196 [2024-05-12 07:06:34.090708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.196 [2024-05-12 07:06:34.090883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.196 [2024-05-12 07:06:34.091071] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.196 [2024-05-12 07:06:34.091096] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.196 [2024-05-12 07:06:34.091112] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.196 [2024-05-12 07:06:34.093376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.196 [2024-05-12 07:06:34.102618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.196 [2024-05-12 07:06:34.103000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.196 [2024-05-12 07:06:34.103224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.196 [2024-05-12 07:06:34.103254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.196 [2024-05-12 07:06:34.103273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.196 [2024-05-12 07:06:34.103439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.196 [2024-05-12 07:06:34.103611] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.196 [2024-05-12 07:06:34.103636] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.196 [2024-05-12 07:06:34.103652] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.196 [2024-05-12 07:06:34.106112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.197 [2024-05-12 07:06:34.115580] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.197 [2024-05-12 07:06:34.116082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.116284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.116313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.197 [2024-05-12 07:06:34.116330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.197 [2024-05-12 07:06:34.116496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.197 [2024-05-12 07:06:34.116685] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.197 [2024-05-12 07:06:34.116721] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.197 [2024-05-12 07:06:34.116738] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.197 [2024-05-12 07:06:34.119140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.197 [2024-05-12 07:06:34.128072] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.197 [2024-05-12 07:06:34.128535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.128777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.128807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.197 [2024-05-12 07:06:34.128827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.197 [2024-05-12 07:06:34.129018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.197 [2024-05-12 07:06:34.129191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.197 [2024-05-12 07:06:34.129217] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.197 [2024-05-12 07:06:34.129235] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.197 [2024-05-12 07:06:34.131469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.197 [2024-05-12 07:06:34.140522] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.197 [2024-05-12 07:06:34.141042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.141361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.141391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.197 [2024-05-12 07:06:34.141412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.197 [2024-05-12 07:06:34.141567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.197 [2024-05-12 07:06:34.141732] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.197 [2024-05-12 07:06:34.141759] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.197 [2024-05-12 07:06:34.141778] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.197 [2024-05-12 07:06:34.144050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.197 [2024-05-12 07:06:34.152964] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.197 [2024-05-12 07:06:34.153366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.153620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.153646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.197 [2024-05-12 07:06:34.153663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.197 [2024-05-12 07:06:34.153866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.197 [2024-05-12 07:06:34.154020] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.197 [2024-05-12 07:06:34.154045] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.197 [2024-05-12 07:06:34.154061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.197 [2024-05-12 07:06:34.156436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.197 [2024-05-12 07:06:34.165567] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.197 [2024-05-12 07:06:34.165981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.166199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.166226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.197 [2024-05-12 07:06:34.166243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.197 [2024-05-12 07:06:34.166410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.197 [2024-05-12 07:06:34.166583] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.197 [2024-05-12 07:06:34.166608] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.197 [2024-05-12 07:06:34.166625] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.197 [2024-05-12 07:06:34.168816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.197 [2024-05-12 07:06:34.178193] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.197 [2024-05-12 07:06:34.178563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.178774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.178801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.197 [2024-05-12 07:06:34.178821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.197 [2024-05-12 07:06:34.178986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.197 [2024-05-12 07:06:34.179177] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.197 [2024-05-12 07:06:34.179202] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.197 [2024-05-12 07:06:34.179219] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.197 [2024-05-12 07:06:34.181534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.197 [2024-05-12 07:06:34.181733] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:27.197 [2024-05-12 07:06:34.181874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:27.197 [2024-05-12 07:06:34.181900] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:27.197 [2024-05-12 07:06:34.181916] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:27.197 [2024-05-12 07:06:34.181997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:27.197 [2024-05-12 07:06:34.182095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:26:27.197 [2024-05-12 07:06:34.182098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:27.197 [2024-05-12 07:06:34.190454] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.197 [2024-05-12 07:06:34.190965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.191180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.191207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.197 [2024-05-12 07:06:34.191226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.197 [2024-05-12 07:06:34.191383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.197 [2024-05-12 07:06:34.191551] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.197 [2024-05-12 07:06:34.191573] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.197 [2024-05-12 07:06:34.191589] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.197 [2024-05-12 07:06:34.193658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.197 [2024-05-12 07:06:34.202734] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.197 [2024-05-12 07:06:34.203371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.203555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.203582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.197 [2024-05-12 07:06:34.203602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.197 [2024-05-12 07:06:34.203757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.197 [2024-05-12 07:06:34.203916] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.197 [2024-05-12 07:06:34.203939] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.197 [2024-05-12 07:06:34.203957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.197 [2024-05-12 07:06:34.206090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.197 [2024-05-12 07:06:34.215182] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.197 [2024-05-12 07:06:34.215686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.215874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.197 [2024-05-12 07:06:34.215901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.197 [2024-05-12 07:06:34.215922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.197 [2024-05-12 07:06:34.216033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.197 [2024-05-12 07:06:34.216225] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.197 [2024-05-12 07:06:34.216257] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.197 [2024-05-12 07:06:34.216276] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.197 [2024-05-12 07:06:34.218309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.197 [2024-05-12 07:06:34.227612] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.198 [2024-05-12 07:06:34.228122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.228363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.228390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.198 [2024-05-12 07:06:34.228410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.198 [2024-05-12 07:06:34.228553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.198 [2024-05-12 07:06:34.228765] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.198 [2024-05-12 07:06:34.228789] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.198 [2024-05-12 07:06:34.228808] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.198 [2024-05-12 07:06:34.230691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.198 [2024-05-12 07:06:34.240052] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.198 [2024-05-12 07:06:34.240540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.240727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.240754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.198 [2024-05-12 07:06:34.240774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.198 [2024-05-12 07:06:34.240964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.198 [2024-05-12 07:06:34.241144] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.198 [2024-05-12 07:06:34.241165] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.198 [2024-05-12 07:06:34.241181] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.198 [2024-05-12 07:06:34.243259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.198 [2024-05-12 07:06:34.252312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.198 [2024-05-12 07:06:34.252800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.252989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.253018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.198 [2024-05-12 07:06:34.253038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.198 [2024-05-12 07:06:34.253264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.198 [2024-05-12 07:06:34.253413] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.198 [2024-05-12 07:06:34.253435] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.198 [2024-05-12 07:06:34.253461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.198 [2024-05-12 07:06:34.255663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.198 [2024-05-12 07:06:34.264495] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.198 [2024-05-12 07:06:34.264863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.265106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.265132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.198 [2024-05-12 07:06:34.265148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.198 [2024-05-12 07:06:34.265296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.198 [2024-05-12 07:06:34.265489] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.198 [2024-05-12 07:06:34.265510] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.198 [2024-05-12 07:06:34.265524] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.198 [2024-05-12 07:06:34.267497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.198 [2024-05-12 07:06:34.276712] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.198 [2024-05-12 07:06:34.277079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.277281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.277316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.198 [2024-05-12 07:06:34.277332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.198 [2024-05-12 07:06:34.277525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.198 [2024-05-12 07:06:34.277685] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.198 [2024-05-12 07:06:34.277739] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.198 [2024-05-12 07:06:34.277755] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.198 [2024-05-12 07:06:34.279841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.198 [2024-05-12 07:06:34.289239] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.198 [2024-05-12 07:06:34.289684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.289886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.289914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.198 [2024-05-12 07:06:34.289931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.198 [2024-05-12 07:06:34.290111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.198 [2024-05-12 07:06:34.290301] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.198 [2024-05-12 07:06:34.290322] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.198 [2024-05-12 07:06:34.290336] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.198 [2024-05-12 07:06:34.292340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.198 [2024-05-12 07:06:34.301417] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.198 [2024-05-12 07:06:34.301758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.301934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.301960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.198 [2024-05-12 07:06:34.301976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.198 [2024-05-12 07:06:34.302157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.198 [2024-05-12 07:06:34.302288] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.198 [2024-05-12 07:06:34.302309] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.198 [2024-05-12 07:06:34.302323] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.198 [2024-05-12 07:06:34.304487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.198 [2024-05-12 07:06:34.313871] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.198 [2024-05-12 07:06:34.314240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.314433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.198 [2024-05-12 07:06:34.314459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.198 [2024-05-12 07:06:34.314475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.198 [2024-05-12 07:06:34.314639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.198 [2024-05-12 07:06:34.314861] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.198 [2024-05-12 07:06:34.314884] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.198 [2024-05-12 07:06:34.314899] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.198 [2024-05-12 07:06:34.316999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.459 [2024-05-12 07:06:34.326015] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.459 [2024-05-12 07:06:34.326321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.326531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.326557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.459 [2024-05-12 07:06:34.326573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.459 [2024-05-12 07:06:34.326690] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.459 [2024-05-12 07:06:34.326849] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.459 [2024-05-12 07:06:34.326872] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.459 [2024-05-12 07:06:34.326886] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.459 [2024-05-12 07:06:34.329005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.459 [2024-05-12 07:06:34.338301] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.459 [2024-05-12 07:06:34.338754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.338914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.338940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.459 [2024-05-12 07:06:34.338956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.459 [2024-05-12 07:06:34.339137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.459 [2024-05-12 07:06:34.339282] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.459 [2024-05-12 07:06:34.339303] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.459 [2024-05-12 07:06:34.339316] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.459 [2024-05-12 07:06:34.341552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.459 [2024-05-12 07:06:34.350595] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.459 [2024-05-12 07:06:34.350891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.351075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.351101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.459 [2024-05-12 07:06:34.351117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.459 [2024-05-12 07:06:34.351281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.459 [2024-05-12 07:06:34.351427] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.459 [2024-05-12 07:06:34.351449] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.459 [2024-05-12 07:06:34.351463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.459 [2024-05-12 07:06:34.353551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.459 [2024-05-12 07:06:34.362757] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.459 [2024-05-12 07:06:34.363079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.363286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.363311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.459 [2024-05-12 07:06:34.363327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.459 [2024-05-12 07:06:34.363444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.459 [2024-05-12 07:06:34.363607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.459 [2024-05-12 07:06:34.363628] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.459 [2024-05-12 07:06:34.363642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.459 [2024-05-12 07:06:34.365586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.459 [2024-05-12 07:06:34.374943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.459 [2024-05-12 07:06:34.375299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.375485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.375512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.459 [2024-05-12 07:06:34.375528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.459 [2024-05-12 07:06:34.375694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.459 [2024-05-12 07:06:34.375883] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.459 [2024-05-12 07:06:34.375905] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.459 [2024-05-12 07:06:34.375919] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.459 [2024-05-12 07:06:34.378121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.459 [2024-05-12 07:06:34.387227] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.459 [2024-05-12 07:06:34.387588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.387780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.387806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.459 [2024-05-12 07:06:34.387823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.459 [2024-05-12 07:06:34.388019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.459 [2024-05-12 07:06:34.388179] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.459 [2024-05-12 07:06:34.388201] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.459 [2024-05-12 07:06:34.388215] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.459 [2024-05-12 07:06:34.390522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.459 [2024-05-12 07:06:34.399496] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.459 [2024-05-12 07:06:34.399830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.459 [2024-05-12 07:06:34.400017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.460 [2024-05-12 07:06:34.400043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.460 [2024-05-12 07:06:34.400060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.460 [2024-05-12 07:06:34.400194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.460 [2024-05-12 07:06:34.400371] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.460 [2024-05-12 07:06:34.400392] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.460 [2024-05-12 07:06:34.400406] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.460 [2024-05-12 07:06:34.402245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.460 [2024-05-12 07:06:34.411762] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.460 [2024-05-12 07:06:34.412084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.460 [2024-05-12 07:06:34.412245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.460 [2024-05-12 07:06:34.412275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.460 [2024-05-12 07:06:34.412292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.460 [2024-05-12 07:06:34.412457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.460 [2024-05-12 07:06:34.412649] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.460 [2024-05-12 07:06:34.412671] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.460 [2024-05-12 07:06:34.412710] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.460 [2024-05-12 07:06:34.414941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.460 [2024-05-12 07:06:34.423861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:27.460 [2024-05-12 07:06:34.424228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.460 [2024-05-12 07:06:34.424426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.460 [2024-05-12 07:06:34.424452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420
00:26:27.460 [2024-05-12 07:06:34.424468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set
00:26:27.460 [2024-05-12 07:06:34.424647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor
00:26:27.460 [2024-05-12 07:06:34.424836] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:27.460 [2024-05-12 07:06:34.424858] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:27.460 [2024-05-12 07:06:34.424872] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:27.460 [2024-05-12 07:06:34.426974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:27.460 [2024-05-12 07:06:34.435919] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.460 [2024-05-12 07:06:34.436363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.436546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.436572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.460 [2024-05-12 07:06:34.436588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.460 [2024-05-12 07:06:34.436745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.460 [2024-05-12 07:06:34.436989] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.460 [2024-05-12 07:06:34.437017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.460 [2024-05-12 07:06:34.437031] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.460 [2024-05-12 07:06:34.439036] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.460 [2024-05-12 07:06:34.448270] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.460 [2024-05-12 07:06:34.448620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.448795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.448822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.460 [2024-05-12 07:06:34.448843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.460 [2024-05-12 07:06:34.448962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.460 [2024-05-12 07:06:34.449154] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.460 [2024-05-12 07:06:34.449175] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.460 [2024-05-12 07:06:34.449189] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.460 [2024-05-12 07:06:34.451273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.460 [2024-05-12 07:06:34.460481] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.460 [2024-05-12 07:06:34.460860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.461052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.461078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.460 [2024-05-12 07:06:34.461094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.460 [2024-05-12 07:06:34.461289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.460 [2024-05-12 07:06:34.461463] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.460 [2024-05-12 07:06:34.461484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.460 [2024-05-12 07:06:34.461498] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.460 [2024-05-12 07:06:34.463603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.460 [2024-05-12 07:06:34.473038] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.460 [2024-05-12 07:06:34.473415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.473608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.473634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.460 [2024-05-12 07:06:34.473650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.460 [2024-05-12 07:06:34.473794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.460 [2024-05-12 07:06:34.473965] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.460 [2024-05-12 07:06:34.473988] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.460 [2024-05-12 07:06:34.474002] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.460 [2024-05-12 07:06:34.476090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.460 [2024-05-12 07:06:34.485308] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.460 [2024-05-12 07:06:34.485645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.485809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.485836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.460 [2024-05-12 07:06:34.485852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.460 [2024-05-12 07:06:34.486007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.460 [2024-05-12 07:06:34.486199] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.460 [2024-05-12 07:06:34.486220] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.460 [2024-05-12 07:06:34.486233] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.460 [2024-05-12 07:06:34.488236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.460 [2024-05-12 07:06:34.497664] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.460 [2024-05-12 07:06:34.498027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.498207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.498233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.460 [2024-05-12 07:06:34.498249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.460 [2024-05-12 07:06:34.498445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.460 [2024-05-12 07:06:34.498636] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.460 [2024-05-12 07:06:34.498658] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.460 [2024-05-12 07:06:34.498686] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.460 [2024-05-12 07:06:34.500621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.460 [2024-05-12 07:06:34.509920] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.460 [2024-05-12 07:06:34.510318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.510465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-05-12 07:06:34.510491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.460 [2024-05-12 07:06:34.510507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.460 [2024-05-12 07:06:34.510671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.460 [2024-05-12 07:06:34.510844] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.460 [2024-05-12 07:06:34.510866] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.461 [2024-05-12 07:06:34.510880] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.461 [2024-05-12 07:06:34.512948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.461 [2024-05-12 07:06:34.522187] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.461 [2024-05-12 07:06:34.522550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.522716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.522742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.461 [2024-05-12 07:06:34.522758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.461 [2024-05-12 07:06:34.522908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.461 [2024-05-12 07:06:34.523092] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.461 [2024-05-12 07:06:34.523114] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.461 [2024-05-12 07:06:34.523127] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.461 [2024-05-12 07:06:34.525118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.461 [2024-05-12 07:06:34.534462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.461 [2024-05-12 07:06:34.534841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.535040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.535067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.461 [2024-05-12 07:06:34.535084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.461 [2024-05-12 07:06:34.535280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.461 [2024-05-12 07:06:34.535410] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.461 [2024-05-12 07:06:34.535431] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.461 [2024-05-12 07:06:34.535444] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.461 [2024-05-12 07:06:34.537476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.461 [2024-05-12 07:06:34.546592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.461 [2024-05-12 07:06:34.546934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.547172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.547200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.461 [2024-05-12 07:06:34.547216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.461 [2024-05-12 07:06:34.547348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.461 [2024-05-12 07:06:34.547478] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.461 [2024-05-12 07:06:34.547499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.461 [2024-05-12 07:06:34.547513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.461 [2024-05-12 07:06:34.549532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.461 [2024-05-12 07:06:34.559052] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.461 [2024-05-12 07:06:34.559407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.559552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.559576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.461 [2024-05-12 07:06:34.559607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.461 [2024-05-12 07:06:34.559809] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.461 [2024-05-12 07:06:34.559973] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.461 [2024-05-12 07:06:34.560021] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.461 [2024-05-12 07:06:34.560036] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.461 [2024-05-12 07:06:34.562168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.461 [2024-05-12 07:06:34.571382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.461 [2024-05-12 07:06:34.571741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.571900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.571925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.461 [2024-05-12 07:06:34.571941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.461 [2024-05-12 07:06:34.572075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.461 [2024-05-12 07:06:34.572237] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.461 [2024-05-12 07:06:34.572258] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.461 [2024-05-12 07:06:34.572272] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.461 [2024-05-12 07:06:34.574403] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.461 [2024-05-12 07:06:34.583661] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.461 [2024-05-12 07:06:34.584087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.584268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-05-12 07:06:34.584294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.461 [2024-05-12 07:06:34.584310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.461 [2024-05-12 07:06:34.584444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.461 [2024-05-12 07:06:34.584591] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.461 [2024-05-12 07:06:34.584612] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.461 [2024-05-12 07:06:34.584626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.720 [2024-05-12 07:06:34.586564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.720 [2024-05-12 07:06:34.595865] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.720 [2024-05-12 07:06:34.596256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.720 [2024-05-12 07:06:34.596456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.720 [2024-05-12 07:06:34.596482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.720 [2024-05-12 07:06:34.596498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.720 [2024-05-12 07:06:34.596741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.720 [2024-05-12 07:06:34.596915] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.720 [2024-05-12 07:06:34.596937] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.720 [2024-05-12 07:06:34.596957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.720 [2024-05-12 07:06:34.599022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.720 [2024-05-12 07:06:34.608133] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.720 [2024-05-12 07:06:34.608488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.720 [2024-05-12 07:06:34.608640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.720 [2024-05-12 07:06:34.608667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.720 [2024-05-12 07:06:34.608694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.720 [2024-05-12 07:06:34.608868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.720 [2024-05-12 07:06:34.609007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.720 [2024-05-12 07:06:34.609029] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.720 [2024-05-12 07:06:34.609057] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.720 [2024-05-12 07:06:34.611232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.720 [2024-05-12 07:06:34.620407] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.720 [2024-05-12 07:06:34.620833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.720 [2024-05-12 07:06:34.621011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.720 [2024-05-12 07:06:34.621038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.720 [2024-05-12 07:06:34.621055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.621251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.621381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.621403] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.621417] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.721 [2024-05-12 07:06:34.623489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.721 [2024-05-12 07:06:34.632870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.721 [2024-05-12 07:06:34.633233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.633444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.633470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.721 [2024-05-12 07:06:34.633487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.633652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.633842] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.633865] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.633878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.721 [2024-05-12 07:06:34.635877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.721 [2024-05-12 07:06:34.645151] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.721 [2024-05-12 07:06:34.645538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.645750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.645779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.721 [2024-05-12 07:06:34.645796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.645961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.646170] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.646191] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.646204] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.721 [2024-05-12 07:06:34.648288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.721 [2024-05-12 07:06:34.657366] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.721 [2024-05-12 07:06:34.657791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.657969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.657995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.721 [2024-05-12 07:06:34.658011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.658161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.658354] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.658376] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.658390] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.721 [2024-05-12 07:06:34.660400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.721 [2024-05-12 07:06:34.669708] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.721 [2024-05-12 07:06:34.670013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.670169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.670195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.721 [2024-05-12 07:06:34.670212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.670361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.670554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.670575] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.670589] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.721 [2024-05-12 07:06:34.672556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.721 [2024-05-12 07:06:34.681924] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.721 [2024-05-12 07:06:34.682282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.682451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.682478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.721 [2024-05-12 07:06:34.682494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.682641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.682819] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.682842] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.682857] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.721 [2024-05-12 07:06:34.685097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.721 [2024-05-12 07:06:34.694268] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.721 [2024-05-12 07:06:34.694674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.694895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.694922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.721 [2024-05-12 07:06:34.694938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.695087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.695295] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.695317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.695331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.721 [2024-05-12 07:06:34.697436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.721 [2024-05-12 07:06:34.706368] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.721 [2024-05-12 07:06:34.706741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.706925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.706952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.721 [2024-05-12 07:06:34.706969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.707134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.707296] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.707317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.707331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.721 [2024-05-12 07:06:34.709420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.721 [2024-05-12 07:06:34.718741] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.721 [2024-05-12 07:06:34.719070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.719235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.719262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.721 [2024-05-12 07:06:34.719278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.719490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.719620] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.719641] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.719654] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.721 [2024-05-12 07:06:34.721807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.721 [2024-05-12 07:06:34.731063] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.721 [2024-05-12 07:06:34.731389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.731597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.721 [2024-05-12 07:06:34.731623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.721 [2024-05-12 07:06:34.731639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.721 [2024-05-12 07:06:34.731814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.721 [2024-05-12 07:06:34.731985] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.721 [2024-05-12 07:06:34.732007] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.721 [2024-05-12 07:06:34.732021] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.734209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.722 [2024-05-12 07:06:34.743358] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.722 [2024-05-12 07:06:34.743740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.743891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.743919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.722 [2024-05-12 07:06:34.743936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.722 [2024-05-12 07:06:34.744113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.722 [2024-05-12 07:06:34.744273] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.722 [2024-05-12 07:06:34.744294] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.722 [2024-05-12 07:06:34.744307] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.746350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.722 [2024-05-12 07:06:34.755547] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.722 [2024-05-12 07:06:34.755911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.756098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.756125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.722 [2024-05-12 07:06:34.756146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.722 [2024-05-12 07:06:34.756264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.722 [2024-05-12 07:06:34.756425] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.722 [2024-05-12 07:06:34.756449] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.722 [2024-05-12 07:06:34.756463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.758654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.722 [2024-05-12 07:06:34.767954] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.722 [2024-05-12 07:06:34.768306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.768489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.768517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.722 [2024-05-12 07:06:34.768533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.722 [2024-05-12 07:06:34.768709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.722 [2024-05-12 07:06:34.768881] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.722 [2024-05-12 07:06:34.768903] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.722 [2024-05-12 07:06:34.768917] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.770833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.722 [2024-05-12 07:06:34.780119] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.722 [2024-05-12 07:06:34.780476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.780661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.780708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.722 [2024-05-12 07:06:34.780726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.722 [2024-05-12 07:06:34.780827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.722 [2024-05-12 07:06:34.780996] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.722 [2024-05-12 07:06:34.781022] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.722 [2024-05-12 07:06:34.781036] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.783129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.722 [2024-05-12 07:06:34.792410] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.722 [2024-05-12 07:06:34.792842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.793017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.793043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.722 [2024-05-12 07:06:34.793059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.722 [2024-05-12 07:06:34.793181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.722 [2024-05-12 07:06:34.793341] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.722 [2024-05-12 07:06:34.793363] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.722 [2024-05-12 07:06:34.793376] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.795294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.722 [2024-05-12 07:06:34.804792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.722 [2024-05-12 07:06:34.805130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.805277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.805303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.722 [2024-05-12 07:06:34.805319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.722 [2024-05-12 07:06:34.805483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.722 [2024-05-12 07:06:34.805676] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.722 [2024-05-12 07:06:34.805732] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.722 [2024-05-12 07:06:34.805748] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.807673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.722 [2024-05-12 07:06:34.816954] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.722 [2024-05-12 07:06:34.817373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.817584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.817610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.722 [2024-05-12 07:06:34.817626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.722 [2024-05-12 07:06:34.817756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.722 [2024-05-12 07:06:34.817894] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.722 [2024-05-12 07:06:34.817917] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.722 [2024-05-12 07:06:34.817931] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.819866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.722 [2024-05-12 07:06:34.829287] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.722 [2024-05-12 07:06:34.829637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.829819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.829847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.722 [2024-05-12 07:06:34.829864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.722 [2024-05-12 07:06:34.830043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.722 [2024-05-12 07:06:34.830223] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.722 [2024-05-12 07:06:34.830245] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.722 [2024-05-12 07:06:34.830258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.832346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.722 [2024-05-12 07:06:34.841693] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.722 [2024-05-12 07:06:34.842053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.842259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.722 [2024-05-12 07:06:34.842285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.722 [2024-05-12 07:06:34.842301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.722 [2024-05-12 07:06:34.842466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.722 [2024-05-12 07:06:34.842643] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.722 [2024-05-12 07:06:34.842664] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.722 [2024-05-12 07:06:34.842678] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.722 [2024-05-12 07:06:34.844905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.983 [2024-05-12 07:06:34.853936] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.983 [2024-05-12 07:06:34.854357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.983 [2024-05-12 07:06:34.854540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.983 [2024-05-12 07:06:34.854566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.983 [2024-05-12 07:06:34.854582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.983 [2024-05-12 07:06:34.854757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.983 [2024-05-12 07:06:34.854928] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.983 [2024-05-12 07:06:34.854950] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.983 [2024-05-12 07:06:34.854964] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.983 [2024-05-12 07:06:34.857250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.983 [2024-05-12 07:06:34.866371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.983 [2024-05-12 07:06:34.866763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.983 [2024-05-12 07:06:34.866946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.983 [2024-05-12 07:06:34.866972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.983 [2024-05-12 07:06:34.866999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.983 [2024-05-12 07:06:34.867180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.983 [2024-05-12 07:06:34.867324] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.983 [2024-05-12 07:06:34.867353] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.983 [2024-05-12 07:06:34.867368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.983 [2024-05-12 07:06:34.869307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.983 [2024-05-12 07:06:34.878582] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.983 [2024-05-12 07:06:34.878962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.983 [2024-05-12 07:06:34.879139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.983 [2024-05-12 07:06:34.879166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.983 [2024-05-12 07:06:34.879182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.983 [2024-05-12 07:06:34.879282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.983 [2024-05-12 07:06:34.879493] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.983 [2024-05-12 07:06:34.879514] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.983 [2024-05-12 07:06:34.879528] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.983 [2024-05-12 07:06:34.881539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.983 [2024-05-12 07:06:34.890876] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.983 [2024-05-12 07:06:34.891179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.983 [2024-05-12 07:06:34.891355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.983 [2024-05-12 07:06:34.891381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.983 [2024-05-12 07:06:34.891397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.983 [2024-05-12 07:06:34.891577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.983 [2024-05-12 07:06:34.891764] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.983 [2024-05-12 07:06:34.891787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.983 [2024-05-12 07:06:34.891802] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.983 [2024-05-12 07:06:34.893926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.984 [2024-05-12 07:06:34.903115] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.984 [2024-05-12 07:06:34.903443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.903608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.903634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.984 [2024-05-12 07:06:34.903650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.984 [2024-05-12 07:06:34.903841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.984 [2024-05-12 07:06:34.903959] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.984 [2024-05-12 07:06:34.903988] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.984 [2024-05-12 07:06:34.904022] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.984 [2024-05-12 07:06:34.906081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.984 [2024-05-12 07:06:34.915460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.984 [2024-05-12 07:06:34.915835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.916020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.916046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.984 [2024-05-12 07:06:34.916062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.984 [2024-05-12 07:06:34.916211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.984 [2024-05-12 07:06:34.916379] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.984 [2024-05-12 07:06:34.916401] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.984 [2024-05-12 07:06:34.916416] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.984 [2024-05-12 07:06:34.918506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.984 07:06:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:27.984 07:06:34 -- common/autotest_common.sh@852 -- # return 0 00:26:27.984 07:06:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:27.984 07:06:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:27.984 07:06:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.984 [2024-05-12 07:06:34.927618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.984 [2024-05-12 07:06:34.927944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.928121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.928148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.984 [2024-05-12 07:06:34.928165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.984 [2024-05-12 07:06:34.928329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.984 [2024-05-12 07:06:34.928444] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.984 [2024-05-12 07:06:34.928466] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.984 [2024-05-12 07:06:34.928481] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.984 [2024-05-12 07:06:34.930558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.984 [2024-05-12 07:06:34.939950] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.984 [2024-05-12 07:06:34.940301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.940452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.940479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.984 [2024-05-12 07:06:34.940495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.984 [2024-05-12 07:06:34.940678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.984 [2024-05-12 07:06:34.940841] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.984 [2024-05-12 07:06:34.940869] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.984 [2024-05-12 07:06:34.940884] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.984 [2024-05-12 07:06:34.943029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.984 07:06:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.984 07:06:34 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:27.984 07:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.984 07:06:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.984 [2024-05-12 07:06:34.950969] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.984 [2024-05-12 07:06:34.952205] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.984 [2024-05-12 07:06:34.952631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.952810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.952838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.984 [2024-05-12 07:06:34.952855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.984 [2024-05-12 07:06:34.953036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.984 [2024-05-12 07:06:34.953214] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.984 [2024-05-12 07:06:34.953235] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.984 [2024-05-12 07:06:34.953249] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.984 [2024-05-12 07:06:34.955368] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.984 07:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.984 07:06:34 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:27.984 07:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.984 07:06:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.984 [2024-05-12 07:06:34.964320] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.984 [2024-05-12 07:06:34.964761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.964949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.964985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.984 [2024-05-12 07:06:34.965002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.984 [2024-05-12 07:06:34.965196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.984 [2024-05-12 07:06:34.965337] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.984 [2024-05-12 07:06:34.965357] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.984 [2024-05-12 07:06:34.965370] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.984 [2024-05-12 07:06:34.967340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.984 [2024-05-12 07:06:34.976609] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.984 [2024-05-12 07:06:34.977148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.977314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.977349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.984 [2024-05-12 07:06:34.977370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.984 [2024-05-12 07:06:34.977499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.984 [2024-05-12 07:06:34.977722] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.984 [2024-05-12 07:06:34.977747] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.984 [2024-05-12 07:06:34.977765] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.984 [2024-05-12 07:06:34.979737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.984 Malloc0 00:26:27.984 07:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.984 [2024-05-12 07:06:34.989183] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.984 07:06:34 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:27.984 07:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.984 07:06:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.984 [2024-05-12 07:06:34.989646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.989845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.984 [2024-05-12 07:06:34.989873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.984 [2024-05-12 07:06:34.989892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.984 [2024-05-12 07:06:34.990098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.984 [2024-05-12 07:06:34.990257] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.984 [2024-05-12 07:06:34.990281] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.984 [2024-05-12 07:06:34.990299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.984 [2024-05-12 07:06:34.992409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.984 07:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.984 07:06:34 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:27.984 07:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.984 07:06:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.984 [2024-05-12 07:06:35.001424] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.984 [2024-05-12 07:06:35.001848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.985 [2024-05-12 07:06:35.002038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.985 [2024-05-12 07:06:35.002065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:27.985 [2024-05-12 07:06:35.002082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:27.985 [2024-05-12 07:06:35.002248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:27.985 [2024-05-12 07:06:35.002425] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.985 [2024-05-12 07:06:35.002448] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.985 [2024-05-12 07:06:35.002463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.985 [2024-05-12 07:06:35.004587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.985 07:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.985 07:06:35 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.985 07:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.985 07:06:35 -- common/autotest_common.sh@10 -- # set +x 00:26:27.985 [2024-05-12 07:06:35.008519] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.985 07:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.985 07:06:35 -- host/bdevperf.sh@38 -- # wait 3143335 00:26:27.985 [2024-05-12 07:06:35.014043] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.985 [2024-05-12 07:06:35.045301] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:37.964 00:26:37.964 Latency(us) 00:26:37.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.964 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.964 Verification LBA range: start 0x0 length 0x4000 00:26:37.964 Nvme1n1 : 15.01 9561.30 37.35 15362.83 0.00 5120.62 873.81 20194.80 00:26:37.964 =================================================================================================================== 00:26:37.964 Total : 9561.30 37.35 15362.83 0.00 5120.62 873.81 20194.80 00:26:37.964 07:06:43 -- host/bdevperf.sh@39 -- # sync 00:26:37.964 07:06:43 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:37.964 07:06:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:37.964 07:06:43 -- common/autotest_common.sh@10 -- # set +x 00:26:37.964 07:06:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:37.964 07:06:43 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:37.964 07:06:43 -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:37.964 07:06:43 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:26:37.964 07:06:43 -- nvmf/common.sh@116 -- # sync 00:26:37.964 07:06:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:37.964 07:06:43 -- nvmf/common.sh@119 -- # set +e 00:26:37.964 07:06:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:37.964 07:06:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:37.964 rmmod nvme_tcp 00:26:37.964 rmmod nvme_fabrics 00:26:37.964 rmmod nvme_keyring 00:26:37.964 07:06:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:37.964 07:06:43 -- nvmf/common.sh@123 -- # set -e 00:26:37.964 07:06:43 -- nvmf/common.sh@124 -- # return 0 00:26:37.964 07:06:43 -- nvmf/common.sh@477 -- # '[' -n 3144030 ']' 00:26:37.964 07:06:43 -- nvmf/common.sh@478 -- # killprocess 3144030 00:26:37.964 07:06:43 -- common/autotest_common.sh@926 -- # '[' -z 3144030 ']' 00:26:37.964 07:06:43 -- common/autotest_common.sh@930 -- # kill -0 3144030 00:26:37.964 07:06:43 -- common/autotest_common.sh@931 -- # uname 00:26:37.964 07:06:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:37.964 07:06:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3144030 00:26:37.964 07:06:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:37.964 07:06:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:37.964 07:06:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3144030' 00:26:37.964 killing process with pid 3144030 00:26:37.964 07:06:43 -- common/autotest_common.sh@945 -- # kill 3144030 00:26:37.964 07:06:43 -- common/autotest_common.sh@950 -- # wait 3144030 00:26:37.964 07:06:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:37.964 07:06:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:37.964 07:06:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:37.964 07:06:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:37.964 07:06:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 
00:26:37.964 07:06:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.964 07:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.964 07:06:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.340 07:06:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:39.340 00:26:39.340 real 0m23.118s 00:26:39.340 user 1m2.639s 00:26:39.340 sys 0m4.288s 00:26:39.340 07:06:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.340 07:06:46 -- common/autotest_common.sh@10 -- # set +x 00:26:39.340 ************************************ 00:26:39.340 END TEST nvmf_bdevperf 00:26:39.340 ************************************ 00:26:39.340 07:06:46 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:39.340 07:06:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:39.340 07:06:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:39.340 07:06:46 -- common/autotest_common.sh@10 -- # set +x 00:26:39.340 ************************************ 00:26:39.340 START TEST nvmf_target_disconnect 00:26:39.340 ************************************ 00:26:39.340 07:06:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:39.340 * Looking for test storage... 
00:26:39.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:39.340 07:06:46 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.340 07:06:46 -- nvmf/common.sh@7 -- # uname -s 00:26:39.340 07:06:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.340 07:06:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.340 07:06:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.340 07:06:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.340 07:06:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.340 07:06:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.340 07:06:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.340 07:06:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.340 07:06:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.340 07:06:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.340 07:06:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:39.340 07:06:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:39.340 07:06:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.340 07:06:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.340 07:06:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.340 07:06:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.340 07:06:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.340 07:06:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.340 07:06:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.340 07:06:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.340 07:06:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.340 07:06:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.340 07:06:46 -- paths/export.sh@5 -- # export PATH 00:26:39.340 07:06:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.340 07:06:46 -- nvmf/common.sh@46 -- # : 0 00:26:39.340 07:06:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:39.340 07:06:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:39.340 07:06:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:39.340 07:06:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.340 07:06:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.340 07:06:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:39.340 07:06:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:39.340 07:06:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:39.340 07:06:46 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:39.340 07:06:46 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:39.340 07:06:46 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:39.341 07:06:46 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:39.341 07:06:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:39.341 07:06:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.341 07:06:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:39.341 07:06:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:39.341 07:06:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:39.341 07:06:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.341 07:06:46 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.341 07:06:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.341 07:06:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:39.341 07:06:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:39.341 07:06:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:39.341 07:06:46 -- common/autotest_common.sh@10 -- # set +x 00:26:41.243 07:06:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:41.243 07:06:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:41.243 07:06:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:41.243 07:06:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:41.243 07:06:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:41.243 07:06:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:41.243 07:06:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:41.243 07:06:48 -- nvmf/common.sh@294 -- # net_devs=() 00:26:41.243 07:06:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:41.243 07:06:48 -- nvmf/common.sh@295 -- # e810=() 00:26:41.243 07:06:48 -- nvmf/common.sh@295 -- # local -ga e810 00:26:41.243 07:06:48 -- nvmf/common.sh@296 -- # x722=() 00:26:41.243 07:06:48 -- nvmf/common.sh@296 -- # local -ga x722 00:26:41.243 07:06:48 -- nvmf/common.sh@297 -- # mlx=() 00:26:41.243 07:06:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:41.243 07:06:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.243 07:06:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.243 07:06:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.243 07:06:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.243 07:06:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.243 07:06:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.243 07:06:48 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.243 07:06:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.243 07:06:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.243 07:06:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.244 07:06:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.244 07:06:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:41.244 07:06:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:41.244 07:06:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:41.244 07:06:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:41.244 07:06:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:41.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:41.244 07:06:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:41.244 07:06:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:41.244 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:41.244 07:06:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:26:41.244 07:06:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:41.244 07:06:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:41.244 07:06:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.244 07:06:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:41.244 07:06:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.244 07:06:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:41.244 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:41.244 07:06:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.244 07:06:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:41.244 07:06:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.244 07:06:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:41.244 07:06:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.244 07:06:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:41.244 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:41.244 07:06:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.244 07:06:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:41.244 07:06:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:41.244 07:06:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:41.244 07:06:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.244 07:06:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.244 07:06:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.244 07:06:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:41.244 07:06:48 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.244 07:06:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.244 07:06:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:41.244 07:06:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.244 07:06:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.244 07:06:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:41.244 07:06:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:41.244 07:06:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.244 07:06:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.244 07:06:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.244 07:06:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.244 07:06:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:41.244 07:06:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.244 07:06:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.244 07:06:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.244 07:06:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:41.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:26:41.244 00:26:41.244 --- 10.0.0.2 ping statistics --- 00:26:41.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.244 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:26:41.244 07:06:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:26:41.244 00:26:41.244 --- 10.0.0.1 ping statistics --- 00:26:41.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.244 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:26:41.244 07:06:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.244 07:06:48 -- nvmf/common.sh@410 -- # return 0 00:26:41.244 07:06:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:41.244 07:06:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.244 07:06:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:41.244 07:06:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.244 07:06:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:41.244 07:06:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:41.502 07:06:48 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:41.502 07:06:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:41.503 07:06:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:41.503 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:26:41.503 ************************************ 00:26:41.503 START TEST nvmf_target_disconnect_tc1 00:26:41.503 ************************************ 00:26:41.503 07:06:48 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:26:41.503 07:06:48 -- host/target_disconnect.sh@32 -- # set +e 00:26:41.503 07:06:48 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.503 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.503 [2024-05-12 07:06:48.466344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.503 
[2024-05-12 07:06:48.466619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.503 [2024-05-12 07:06:48.466652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x89b920 with addr=10.0.0.2, port=4420 00:26:41.503 [2024-05-12 07:06:48.466704] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:41.503 [2024-05-12 07:06:48.466730] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:41.503 [2024-05-12 07:06:48.466746] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:41.503 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:41.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:41.503 Initializing NVMe Controllers 00:26:41.503 07:06:48 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:41.503 07:06:48 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:41.503 07:06:48 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:26:41.503 07:06:48 -- common/autotest_common.sh@1132 -- # return 0 00:26:41.503 07:06:48 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:41.503 07:06:48 -- host/target_disconnect.sh@41 -- # set -e 00:26:41.503 00:26:41.503 real 0m0.091s 00:26:41.503 user 0m0.038s 00:26:41.503 sys 0m0.051s 00:26:41.503 07:06:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.503 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:26:41.503 ************************************ 00:26:41.503 END TEST nvmf_target_disconnect_tc1 00:26:41.503 ************************************ 00:26:41.503 07:06:48 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:41.503 07:06:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:41.503 07:06:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:41.503 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:26:41.503 
************************************ 00:26:41.503 START TEST nvmf_target_disconnect_tc2 00:26:41.503 ************************************ 00:26:41.503 07:06:48 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:26:41.503 07:06:48 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:41.503 07:06:48 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:41.503 07:06:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:41.503 07:06:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:41.503 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:26:41.503 07:06:48 -- nvmf/common.sh@469 -- # nvmfpid=3147212 00:26:41.503 07:06:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:41.503 07:06:48 -- nvmf/common.sh@470 -- # waitforlisten 3147212 00:26:41.503 07:06:48 -- common/autotest_common.sh@819 -- # '[' -z 3147212 ']' 00:26:41.503 07:06:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.503 07:06:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:41.503 07:06:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.503 07:06:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:41.503 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:26:41.503 [2024-05-12 07:06:48.554363] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:41.503 [2024-05-12 07:06:48.554443] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.503 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.503 [2024-05-12 07:06:48.624540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.763 [2024-05-12 07:06:48.741734] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:41.763 [2024-05-12 07:06:48.741883] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.763 [2024-05-12 07:06:48.741901] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.763 [2024-05-12 07:06:48.741913] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.763 [2024-05-12 07:06:48.741996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:41.763 [2024-05-12 07:06:48.742054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:41.763 [2024-05-12 07:06:48.742120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:41.763 [2024-05-12 07:06:48.742122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:42.698 07:06:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:42.698 07:06:49 -- common/autotest_common.sh@852 -- # return 0 00:26:42.699 07:06:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:42.699 07:06:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:42.699 07:06:49 -- common/autotest_common.sh@10 -- # set +x 00:26:42.699 07:06:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.699 07:06:49 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:26:42.699 07:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:42.699 07:06:49 -- common/autotest_common.sh@10 -- # set +x
00:26:42.699 Malloc0
00:26:42.699 07:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:42.699 07:06:49 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:42.699 07:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:42.699 07:06:49 -- common/autotest_common.sh@10 -- # set +x
00:26:42.699 [2024-05-12 07:06:49.550155] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:42.699 07:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:42.699 07:06:49 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:42.699 07:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:42.699 07:06:49 -- common/autotest_common.sh@10 -- # set +x
00:26:42.699 07:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:42.699 07:06:49 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:42.699 07:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:42.699 07:06:49 -- common/autotest_common.sh@10 -- # set +x
00:26:42.699 07:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:42.699 07:06:49 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:42.699 07:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:42.699 07:06:49 -- common/autotest_common.sh@10 -- # set +x
00:26:42.699 [2024-05-12 07:06:49.578358] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:42.699 07:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:42.699 07:06:49 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:42.699 07:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:42.699 07:06:49 -- common/autotest_common.sh@10 -- # set +x
00:26:42.699 07:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:42.699 07:06:49 -- host/target_disconnect.sh@50 -- # reconnectpid=3147372
00:26:42.699 07:06:49 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:42.699 07:06:49 -- host/target_disconnect.sh@52 -- # sleep 2
00:26:42.699 EAL: No free 2048 kB hugepages reported on node 1
00:26:44.611 07:06:51 -- host/target_disconnect.sh@53 -- # kill -9 3147212
00:26:44.611 07:06:51 -- host/target_disconnect.sh@55 -- # sleep 2
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Read completed with error (sct=0, sc=8)
00:26:44.611 starting I/O failed
00:26:44.611 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 [2024-05-12 07:06:51.604148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 [2024-05-12 07:06:51.604553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Write completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 Read completed with error (sct=0, sc=8)
00:26:44.612 starting I/O failed
00:26:44.612 [2024-05-12 07:06:51.604915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:44.612 [2024-05-12 07:06:51.605165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.605394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.605422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.612 qpair failed and we were unable to recover it.
00:26:44.612 [2024-05-12 07:06:51.605595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.605843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.605871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.612 qpair failed and we were unable to recover it.
00:26:44.612 [2024-05-12 07:06:51.606036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.606252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.606286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.612 qpair failed and we were unable to recover it.
00:26:44.612 [2024-05-12 07:06:51.606507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.606767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.606795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.612 qpair failed and we were unable to recover it.
00:26:44.612 [2024-05-12 07:06:51.606951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.607257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.607284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.612 qpair failed and we were unable to recover it.
00:26:44.612 [2024-05-12 07:06:51.607489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.607647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.607674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.612 qpair failed and we were unable to recover it.
00:26:44.612 [2024-05-12 07:06:51.607847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.608042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.608068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.612 qpair failed and we were unable to recover it.
00:26:44.612 [2024-05-12 07:06:51.608441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.612 [2024-05-12 07:06:51.608650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.608690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.608857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.609023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.609049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.609257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.609530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.609555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.609856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.610010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.610037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.610191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.610430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.610472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.610659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.610853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.610885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.611056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.611247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.611274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.611533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.611785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.611812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.611999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.612304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.612330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.612563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.612773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.612801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.612997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.613244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.613269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.613525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.613806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.613834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.613999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.614257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.614283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.614478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.614650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.614677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
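The completion-error waves in this transcript all follow one fixed line shape, which makes mechanical triage easy. Below is a hypothetical offline helper, not part of the SPDK test suite; the function name and sample text are illustrative only. It tallies how many Read versus Write I/Os failed in an excerpt like the one above:

```python
import re
from collections import Counter

# Matches entries of the form emitted by the reconnect example above:
#   "Read completed with error (sct=0, sc=8)"
# The surrounding timestamps are ignored by the pattern.
COMPLETION_RE = re.compile(r"\b(Read|Write) completed with error \(sct=(\d+), sc=(\d+)\)")

def tally_completion_errors(log_text: str) -> Counter:
    """Count failed completions per operation type in a log excerpt."""
    return Counter(m.group(1) for m in COMPLETION_RE.finditer(log_text))

sample = (
    "00:26:44.612 Read completed with error (sct=0, sc=8)\n"
    "00:26:44.612 starting I/O failed\n"
    "00:26:44.612 Write completed with error (sct=0, sc=8)\n"
    "00:26:44.612 starting I/O failed\n"
)
print(sorted(tally_completion_errors(sample).items()))  # [('Read', 1), ('Write', 1)]
```

Because the pattern also captures the sct/sc fields, the same scan could be extended to group failures by status code rather than by operation type.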
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Read completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 Write completed with error (sct=0, sc=8)
00:26:44.613 starting I/O failed
00:26:44.613 [2024-05-12 07:06:51.615028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:44.613 [2024-05-12 07:06:51.615311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.615578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.615626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.615815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.615992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.616033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.616328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.616640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.616689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.616913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.617110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.617136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.617362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.617654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.617702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.617905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.618084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.618111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.618363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.618516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.618548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.613 qpair failed and we were unable to recover it.
00:26:44.613 [2024-05-12 07:06:51.618765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.618912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.613 [2024-05-12 07:06:51.618939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.619212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.619398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.619424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.619664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.619838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.619866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.620050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.620259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.620289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.620491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.620685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.620719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.621863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.622076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.622102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.622278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.622474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.622504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.622718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.622918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.622945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.623137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.623283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.623310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.623536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.623720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.623747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.623965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.624210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.624251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.624437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.624666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.624701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.624929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.625192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.625222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.625494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.625703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.625730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.625912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.626127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.626153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.626353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.626527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.626554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.626768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.626974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.627005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.627202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.627372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.627399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.627640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.627829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.614 [2024-05-12 07:06:51.627857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.614 qpair failed and we were unable to recover it.
00:26:44.614 [2024-05-12 07:06:51.628053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.628234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.628260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 00:26:44.614 [2024-05-12 07:06:51.628476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.628669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.628704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 00:26:44.614 [2024-05-12 07:06:51.628923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.629140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.629166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 00:26:44.614 [2024-05-12 07:06:51.629358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.629541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.629572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 
00:26:44.614 [2024-05-12 07:06:51.629826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.630060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.630089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 00:26:44.614 [2024-05-12 07:06:51.630326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.630540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.630566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 00:26:44.614 [2024-05-12 07:06:51.630800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.631033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.631060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 00:26:44.614 [2024-05-12 07:06:51.631266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.631491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.631520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 
00:26:44.614 [2024-05-12 07:06:51.631757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.631936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.631963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 00:26:44.614 [2024-05-12 07:06:51.632179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.632403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.632432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 00:26:44.614 [2024-05-12 07:06:51.632658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.632874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.614 [2024-05-12 07:06:51.632900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.614 qpair failed and we were unable to recover it. 00:26:44.614 [2024-05-12 07:06:51.633119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.633347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.633373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 
00:26:44.615 [2024-05-12 07:06:51.633563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.633774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.633802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.634030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.634249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.634275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.634495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.634693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.634740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.634947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.635155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.635181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 
00:26:44.615 [2024-05-12 07:06:51.635417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.635596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.635623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.635814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.635969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.636009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.636224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.636471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.636497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.636719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.636899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.636926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 
00:26:44.615 [2024-05-12 07:06:51.637360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.637664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.637689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.637891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.638146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.638172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.638380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.638674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.638741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.638969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.639201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.639230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 
00:26:44.615 [2024-05-12 07:06:51.639467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.639674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.639708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.639893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.640081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.640112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.640353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.640567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.640594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.640810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.641023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.641049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 
00:26:44.615 [2024-05-12 07:06:51.641315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.641494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.641520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.641708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.641940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.641967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.642185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.642388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.642414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.642723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.643026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.643052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 
00:26:44.615 [2024-05-12 07:06:51.643197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.643350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.643376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.643563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.643775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.643802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.644022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.644221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.644246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.644446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.644664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.644693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 
00:26:44.615 [2024-05-12 07:06:51.644917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.645142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.645172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.645469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.645646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.645673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.645893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.646116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.646141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.615 qpair failed and we were unable to recover it. 00:26:44.615 [2024-05-12 07:06:51.646355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.615 [2024-05-12 07:06:51.646552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.646581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 
00:26:44.616 [2024-05-12 07:06:51.646792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.646999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.647028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.647196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.647413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.647456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.647707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.647919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.647946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.648116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.648327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.648353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 
00:26:44.616 [2024-05-12 07:06:51.648661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.648845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.648871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.649106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.649300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.649329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.649537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.649750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.649777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.649981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.650233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.650273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 
00:26:44.616 [2024-05-12 07:06:51.650468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.650702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.650728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.650948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.651115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.651140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.651287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.651493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.651519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.651760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.651936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.651962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 
00:26:44.616 [2024-05-12 07:06:51.652179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.652411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.652440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.652661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.652871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.652897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.653068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.653222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.653265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.653481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.653691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.653729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 
00:26:44.616 [2024-05-12 07:06:51.653947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.654130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.654157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.654328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.654561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.654590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.654822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.655004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.655030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 00:26:44.616 [2024-05-12 07:06:51.655229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.655446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.616 [2024-05-12 07:06:51.655472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.616 qpair failed and we were unable to recover it. 
00:26:44.619 [2024-05-12 07:06:51.692409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.692609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.692638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.619 qpair failed and we were unable to recover it. 00:26:44.619 [2024-05-12 07:06:51.692832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.693024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.693053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.619 qpair failed and we were unable to recover it. 00:26:44.619 [2024-05-12 07:06:51.693283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.693491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.693517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.619 qpair failed and we were unable to recover it. 00:26:44.619 [2024-05-12 07:06:51.693671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.693861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.693888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.619 qpair failed and we were unable to recover it. 
00:26:44.619 [2024-05-12 07:06:51.694098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.694291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.619 [2024-05-12 07:06:51.694318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.619 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.694534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.694728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.694764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.694945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.695185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.695215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.695426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.695654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.695684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 
00:26:44.620 [2024-05-12 07:06:51.695896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.696075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.696100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.696329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.696505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.696530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.696776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.696937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.696963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.697219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.697453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.697482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 
00:26:44.620 [2024-05-12 07:06:51.697677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.697942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.697968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.698218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.698405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.698452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.698675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.698861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.698887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.699092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.699286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.699317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 
00:26:44.620 [2024-05-12 07:06:51.699495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.699715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.699755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.699965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.700168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.700193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.700428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.700641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.700670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.700912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.701123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.701153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 
00:26:44.620 [2024-05-12 07:06:51.701374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.701576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.701602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.701834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.702007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.702033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.702241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.702472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.702498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.702735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.702934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.702959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 
00:26:44.620 [2024-05-12 07:06:51.703147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.703389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.703415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.703627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.703841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.703867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.704055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.704263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.704289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.704450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.704656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.704684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 
00:26:44.620 [2024-05-12 07:06:51.704923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.705154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.705180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.705415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.705711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.705736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.705959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.706230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.706254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.706438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.706641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.706670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 
00:26:44.620 [2024-05-12 07:06:51.706948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.707154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.707179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.620 qpair failed and we were unable to recover it. 00:26:44.620 [2024-05-12 07:06:51.707437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.620 [2024-05-12 07:06:51.707660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.707708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.707889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.708098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.708124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.708323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.708510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.708536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 
00:26:44.621 [2024-05-12 07:06:51.708757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.708944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.708988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.709177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.709374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.709403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.709635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.709837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.709863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.710157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.710410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.710457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 
00:26:44.621 [2024-05-12 07:06:51.710659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.710833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.710864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.711124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.711324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.711350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.711586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.711820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.711846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.711998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.712203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.712244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 
00:26:44.621 [2024-05-12 07:06:51.712445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.712643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.712668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.712869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.713067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.713096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.713288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.713480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.713509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.713749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.713917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.713959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 
00:26:44.621 [2024-05-12 07:06:51.714156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.714312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.714341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.714518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.714666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.714719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.714924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.715095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.715124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.715356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.715585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.715614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 
00:26:44.621 [2024-05-12 07:06:51.715816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.716098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.716127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.716329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.716594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.716619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.716786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.716968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.717008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.717195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.717393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.717422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 
00:26:44.621 [2024-05-12 07:06:51.717616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.717859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.717890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.621 qpair failed and we were unable to recover it. 00:26:44.621 [2024-05-12 07:06:51.718185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.621 [2024-05-12 07:06:51.718479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.718532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.622 qpair failed and we were unable to recover it. 00:26:44.622 [2024-05-12 07:06:51.718749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.718971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.719000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.622 qpair failed and we were unable to recover it. 00:26:44.622 [2024-05-12 07:06:51.719193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.719398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.719424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.622 qpair failed and we were unable to recover it. 
00:26:44.622 [2024-05-12 07:06:51.719620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.719835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.719864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.622 qpair failed and we were unable to recover it. 00:26:44.622 [2024-05-12 07:06:51.720020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.720192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.720221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.622 qpair failed and we were unable to recover it. 00:26:44.622 [2024-05-12 07:06:51.720412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.720673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.720709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.622 qpair failed and we were unable to recover it. 00:26:44.622 [2024-05-12 07:06:51.720904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.721093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.622 [2024-05-12 07:06:51.721122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.622 qpair failed and we were unable to recover it. 
00:26:44.622 [2024-05-12 07:06:51.721342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.721507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.721537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.721799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.721965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.721990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.722217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.722516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.722542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.722808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.723124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.723152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.723355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.723556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.723581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.723748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.723942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.723968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.724186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.724448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.724477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.724720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.724916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.724942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.725162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.725405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.725445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.725686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.725881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.725910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.726133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.726306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.726332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.726546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.726777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.726804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.727057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.727260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.727286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.727429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.727662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.727691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.727945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.728131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.728163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.728363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.728557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.728586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.728793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.728983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.729010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.729254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.729465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.729490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.729691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.729907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.729932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.730104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.730301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.730326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.730511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.730796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.730823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.731011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.731215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.731240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.622 [2024-05-12 07:06:51.731450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.731658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.622 [2024-05-12 07:06:51.731687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.622 qpair failed and we were unable to recover it.
00:26:44.623 [2024-05-12 07:06:51.731891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.623 [2024-05-12 07:06:51.732044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.623 [2024-05-12 07:06:51.732084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.623 qpair failed and we were unable to recover it.
00:26:44.623 [2024-05-12 07:06:51.732273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.623 [2024-05-12 07:06:51.732471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.623 [2024-05-12 07:06:51.732496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.623 qpair failed and we were unable to recover it.
00:26:44.623 [2024-05-12 07:06:51.732677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.623 [2024-05-12 07:06:51.732859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.623 [2024-05-12 07:06:51.732885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.623 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.733042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.733249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.733276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.733486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.733656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.733684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.733918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.734107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.734135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.734458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.734707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.734734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.734900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.735051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.735076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.735281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.735655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.735713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.735931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.736182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.736228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.736599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.736800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.736827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.736988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.737177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.737230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.737519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.737760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.737787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.737968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.738160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.738185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.738410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.738621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.738647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.738856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.739038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.739080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.739370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.739584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.739614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.739848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.740052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.740078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.740284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.740555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.740606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.740794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.740974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.741001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.741237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.741516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.741573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.741782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.741981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.742055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.742295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.742645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.742703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.742899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.743077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.743103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.743285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.743644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.743703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.743916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.744159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.744208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.744558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.744786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.744812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.896 [2024-05-12 07:06:51.744969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.745165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.896 [2024-05-12 07:06:51.745194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.896 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.745512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.745745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.745772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.745971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.746262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.746290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.746524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.746706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.746744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.746900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.747047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.747088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.747327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.747504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.747531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.747741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.747920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.747949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.748159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.748333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.748375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.748575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.748797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.748827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.749068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.749248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.749276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.749469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.749655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.749683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.749888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.750075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.750101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.750289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.750448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.750474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.750676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.750830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.750857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.751042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.751214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.751243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.751476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.751675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.751710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.751919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.752121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.752149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.752356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.752530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.752556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.752743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.752943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.752972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.753167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.753382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.753428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.753660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.753841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.753868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.754020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.754246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.897 [2024-05-12 07:06:51.754307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.897 qpair failed and we were unable to recover it.
00:26:44.897 [2024-05-12 07:06:51.754592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.754797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.754823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-05-12 07:06:51.755008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.755294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.755319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-05-12 07:06:51.755491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.755633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.755659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-05-12 07:06:51.755851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.756063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.756089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-05-12 07:06:51.756323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.756564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.756618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-05-12 07:06:51.756830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.757058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.757084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-05-12 07:06:51.757259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.757458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.757486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 00:26:44.897 [2024-05-12 07:06:51.757712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.757880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.757909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.897 qpair failed and we were unable to recover it. 
00:26:44.897 [2024-05-12 07:06:51.758112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-05-12 07:06:51.758305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.758371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.758573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.758777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.758807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.759009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.759187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.759214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.759411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.759606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.759636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 
00:26:44.898 [2024-05-12 07:06:51.759835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.760034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.760060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.760278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.760437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.760467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.760621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.760771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.760798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.761011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.761237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.761283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 
00:26:44.898 [2024-05-12 07:06:51.761513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.761766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.761793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.761997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.762180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.762206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.762409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.762564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.762590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.762777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.763010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.763036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 
00:26:44.898 [2024-05-12 07:06:51.763208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.763353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.763379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.763568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.763768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.763797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.764003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.764234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.764263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.764461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.764654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.764687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 
00:26:44.898 [2024-05-12 07:06:51.764939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.765115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.765141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.765315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.765643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.765694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.765909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.766079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.766107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.766328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.766528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.766554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 
00:26:44.898 [2024-05-12 07:06:51.766731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.766919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.766948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.767172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.767375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.767400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.767557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.767761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.767788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.768033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.768266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.768295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 
00:26:44.898 [2024-05-12 07:06:51.768603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.768833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.768864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.769059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.769232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.769262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.769492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.769687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.769722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.769952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.770248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.770303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 
00:26:44.898 [2024-05-12 07:06:51.770564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.770771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.770798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.770984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.771185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-05-12 07:06:51.771232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.898 qpair failed and we were unable to recover it. 00:26:44.898 [2024-05-12 07:06:51.771430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.771730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.771757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.771970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.772201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.772227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 
00:26:44.899 [2024-05-12 07:06:51.772437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.772669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.772709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.772918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.773116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.773145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.773377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.773606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.773635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.773866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.774015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.774041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 
00:26:44.899 [2024-05-12 07:06:51.774257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.774606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.774656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.774876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.775034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.775078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.775318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.775482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.775511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.775708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.775932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.775960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 
00:26:44.899 [2024-05-12 07:06:51.776152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.776347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.776373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.776581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.776758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.776787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.777020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.777237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.777266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.777463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.777737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.777768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 
00:26:44.899 [2024-05-12 07:06:51.777971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.778267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.778292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.778499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.778675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.778721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.778935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.779120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.779146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.779321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.779526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.779566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 
00:26:44.899 [2024-05-12 07:06:51.779817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.779996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.780022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.780246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.780459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.780484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.780710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.780908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.780934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.781112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.781290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.781331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 
00:26:44.899 [2024-05-12 07:06:51.781642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.781896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.781924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.782125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.782374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.782399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.782587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.782863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.782894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 00:26:44.899 [2024-05-12 07:06:51.783108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.783400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.899 [2024-05-12 07:06:51.783426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.899 qpair failed and we were unable to recover it. 
00:26:44.899 [2024-05-12 07:06:51.783613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.899 [2024-05-12 07:06:51.783795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.899 [2024-05-12 07:06:51.783844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.899 qpair failed and we were unable to recover it.
[... the same four-line pattern repeats for every retry from 07:06:51.784 through 07:06:51.824: two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock error for tqpair=0x16599f0 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:26:44.903 [2024-05-12 07:06:51.824553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.824754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.824780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.824954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.825134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.825161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.825341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.825536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.825565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.825777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.825969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.825999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 
00:26:44.903 [2024-05-12 07:06:51.826327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.826547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.826576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.826803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.827000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.827029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.827250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.827486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.827515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.827713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.827896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.827925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 
00:26:44.903 [2024-05-12 07:06:51.828160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.828421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.828447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.828651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.828846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.828875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.829082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.829259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.829289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.829522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.829693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.829733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 
00:26:44.903 [2024-05-12 07:06:51.829905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.830067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.830095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.830334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.830522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.830551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.830746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.830906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.830933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.831109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.831289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.831335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 
00:26:44.903 [2024-05-12 07:06:51.831534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.831761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.831790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.832021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.832207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.832233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.832436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.832616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.832645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.832865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.833111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.833153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 
00:26:44.903 [2024-05-12 07:06:51.833341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.833528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.833557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.833761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.833979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.834012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.834215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.834433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.834459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.834658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.834832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.834859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 
00:26:44.903 [2024-05-12 07:06:51.835034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.835330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.835391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.835593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.835795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.835826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.836046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.836243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.836270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.836514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.836758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.836785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 
00:26:44.903 [2024-05-12 07:06:51.837090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.837280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.837309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.903 qpair failed and we were unable to recover it. 00:26:44.903 [2024-05-12 07:06:51.837508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.903 [2024-05-12 07:06:51.837712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.837749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.837952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.838369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.838398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.838635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.838837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.838864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-05-12 07:06:51.839050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.839244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.839272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.839427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.839612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.839640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.839823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.840022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.840052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.840255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.840455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.840484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-05-12 07:06:51.840684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.840927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.840956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.841150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.841396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.841456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.841672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.841895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.841924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.842154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.842415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.842468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-05-12 07:06:51.842662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.842870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.842899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.843068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.843274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.843300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.843451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.843678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.843711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.843904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.844103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.844132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-05-12 07:06:51.844385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.844583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.844612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.844842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.845053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.845079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.845284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.845568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.845621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.845844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.846129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.846158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-05-12 07:06:51.846361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.846575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.846602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.846841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.847202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.847259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.847448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.847633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.847674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.847879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.848025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.848066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-05-12 07:06:51.848431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.848664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.848703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.848914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.849065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.849092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.849359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.849547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.849576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.849808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.850004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.850033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 
00:26:44.904 [2024-05-12 07:06:51.850222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.850545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.850598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.850826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.851012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.851039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.851217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.851567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.904 [2024-05-12 07:06:51.851596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.904 qpair failed and we were unable to recover it. 00:26:44.904 [2024-05-12 07:06:51.851790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.851945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.851971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.905 qpair failed and we were unable to recover it. 
00:26:44.905 [2024-05-12 07:06:51.852179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.852450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.852476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.905 qpair failed and we were unable to recover it. 00:26:44.905 [2024-05-12 07:06:51.852682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.852936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.852966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.905 qpair failed and we were unable to recover it. 00:26:44.905 [2024-05-12 07:06:51.853205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.853404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.853433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.905 qpair failed and we were unable to recover it. 00:26:44.905 [2024-05-12 07:06:51.853639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.853885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.905 [2024-05-12 07:06:51.853915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.905 qpair failed and we were unable to recover it. 
00:26:44.905 [2024-05-12 07:06:51.854111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.854402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.854456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.854671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.854894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.854925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.855132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.855301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.855330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.855529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.855728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.855758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.855950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.856121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.856164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.856325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.856493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.856522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.856719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.856926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.856955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.857156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.857370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.857411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.857609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.857789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.857819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.858016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.858297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.858356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.858553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.858730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.858774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.859004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.859183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.859213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.859471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.859669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.859707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.859920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.860204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.860270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.860444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.860642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.860673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.860912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.861126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.861158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.861361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.861533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.861560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.861789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.862017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.862050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.862278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.862484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.862514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.862753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.862935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.862962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.905 qpair failed and we were unable to recover it.
00:26:44.905 [2024-05-12 07:06:51.863159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.905 [2024-05-12 07:06:51.863326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.863356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.863563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.863785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.863820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.864027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.864224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.864253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.864427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.864652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.864681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.864876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.865051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.865082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.865259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.865459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.865488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.865691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.865885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.865913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.866149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.866323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.866350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.866566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.866776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.866803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.866985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.867223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.867252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.867441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.867652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.867682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.867872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.868052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.868087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.868257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.868482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.868512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.868694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.868873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.868901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.869090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.869249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.869278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.869461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.869657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.869687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.869895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.870123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.870153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.870341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.870509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.870538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.870764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.870942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.870969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.871191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.871392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.871419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.871595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.871807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.871836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.872046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.872246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.872273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.872469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.872642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.872673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.872915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.873093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.873124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.873328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.873518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.873548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.873752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.873946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.873975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.874172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.874345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.874375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.874558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.874743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.874787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.875016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.875196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.875223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.906 [2024-05-12 07:06:51.875411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.875612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.906 [2024-05-12 07:06:51.875642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.906 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.875843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.876010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.876040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.876221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.876399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.876441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.876619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.876836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.876867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.877079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.877230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.877274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.877444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.877665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.877703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.877907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.878084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.878128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.878334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.878516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.878560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.878792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.879011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.879038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.879226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.879409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.879507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.879757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.879913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.879940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.880118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.880296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.880323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.880516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.880722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.880755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.880967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.881180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.881208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.881413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.881580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.881609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.881816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.881971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.881999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.882178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.882339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.882368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.882572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.882773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.882802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.882980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.883171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.883200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.883373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.883589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.883619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.883795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.884003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.884030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.884214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.884375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.884404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.884587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.884814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.907 [2024-05-12 07:06:51.884846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.907 qpair failed and we were unable to recover it.
00:26:44.907 [2024-05-12 07:06:51.885078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.885300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.885329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.907 qpair failed and we were unable to recover it. 00:26:44.907 [2024-05-12 07:06:51.885508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.885701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.885732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.907 qpair failed and we were unable to recover it. 00:26:44.907 [2024-05-12 07:06:51.885898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.886054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.886102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.907 qpair failed and we were unable to recover it. 00:26:44.907 [2024-05-12 07:06:51.886300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.886502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.886529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.907 qpair failed and we were unable to recover it. 
00:26:44.907 [2024-05-12 07:06:51.886799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.887025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.887057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.907 qpair failed and we were unable to recover it. 00:26:44.907 [2024-05-12 07:06:51.887286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.887478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.887508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.907 qpair failed and we were unable to recover it. 00:26:44.907 [2024-05-12 07:06:51.887712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.887942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.907 [2024-05-12 07:06:51.887971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.907 qpair failed and we were unable to recover it. 00:26:44.907 [2024-05-12 07:06:51.888135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.888359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.888389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 
00:26:44.908 [2024-05-12 07:06:51.888597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.888773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.888803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.888998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.889176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.889202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.889356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.889558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.889588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.889785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.889976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.890006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 
00:26:44.908 [2024-05-12 07:06:51.890207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.890411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.890439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.890616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.890818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.890849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.891052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.891286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.891316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.891491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.891657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.891687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 
00:26:44.908 [2024-05-12 07:06:51.891891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.892083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.892113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.892281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.892455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.892487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.892689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.892871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.892902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.893103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.893301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.893329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 
00:26:44.908 [2024-05-12 07:06:51.893506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.893708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.893740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.893947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.894157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.894187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.894383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.894556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.894584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.894795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.894958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.895001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 
00:26:44.908 [2024-05-12 07:06:51.895190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.895389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.895418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.895587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.895795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.895824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.896033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.896252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.896282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.896476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.896646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.896676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 
00:26:44.908 [2024-05-12 07:06:51.896880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.897075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.897105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.897297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.897493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.897523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.897727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.897932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.897962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.898159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.898324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.898354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 
00:26:44.908 [2024-05-12 07:06:51.898519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.898686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.898725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.898924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.899166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.899193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.899346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.899528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.899555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 00:26:44.908 [2024-05-12 07:06:51.899705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.899859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.899901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.908 qpair failed and we were unable to recover it. 
00:26:44.908 [2024-05-12 07:06:51.900105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.908 [2024-05-12 07:06:51.900333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.900360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.900542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.900726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.900753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.900902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.901053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.901079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.901283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.901518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.901548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 
00:26:44.909 [2024-05-12 07:06:51.901714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.901920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.901951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.902121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.902322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.902352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.902587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.902765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.902793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.902995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.903170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.903199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 
00:26:44.909 [2024-05-12 07:06:51.903370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.903565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.903594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.903787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.903977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.904007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.904208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.904365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.904392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.904544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.904751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.904779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 
00:26:44.909 [2024-05-12 07:06:51.904934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.905179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.905206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.905412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.905596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.905628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.905858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.906037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.906067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.906274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.906429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.906456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 
00:26:44.909 [2024-05-12 07:06:51.906651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.906825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.906855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.907058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.907281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.907310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.907504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.907660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.907687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.907857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.908033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.908063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 
00:26:44.909 [2024-05-12 07:06:51.908261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.908437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.908484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.908689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.908877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.908907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.909076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.909219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.909245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.909453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.909644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.909671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 
00:26:44.909 [2024-05-12 07:06:51.909857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.910053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.909 [2024-05-12 07:06:51.910081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.909 qpair failed and we were unable to recover it. 00:26:44.909 [2024-05-12 07:06:51.910276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.910 [2024-05-12 07:06:51.910471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.910 [2024-05-12 07:06:51.910495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.910 qpair failed and we were unable to recover it. 00:26:44.910 [2024-05-12 07:06:51.910675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.910 [2024-05-12 07:06:51.910938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.910 [2024-05-12 07:06:51.910969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.910 qpair failed and we were unable to recover it. 00:26:44.910 [2024-05-12 07:06:51.911179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.910 [2024-05-12 07:06:51.911408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.910 [2024-05-12 07:06:51.911435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:44.910 qpair failed and we were unable to recover it. 
00:26:44.910 [2024-05-12 07:06:51.911666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.911877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.911907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.912103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.912295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.912326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.912496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.912670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.912705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.912910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.913075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.913105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.913331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.913529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.913559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.913753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.913921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.913951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.914146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.914320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.914349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.914572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.914775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.914808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.915009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.915196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.915223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.915376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.915594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.915623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.915830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.916012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.916040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.916218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.916385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.916415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.916582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.916776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.916816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.917039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.917206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.917236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.917408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.917603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.917632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.917833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.918014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.918045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.918306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.918461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.918488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.918667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.918916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.918947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.919177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.919353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.919383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.919579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.919780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.919811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.920033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.920223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.920253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.920421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.920589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.920618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.920854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.921041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.921068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.921217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.921460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.921486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.921665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.921854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.921883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.922098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.922293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.922322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.910 qpair failed and we were unable to recover it.
00:26:44.910 [2024-05-12 07:06:51.922519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.922679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.910 [2024-05-12 07:06:51.922723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.922907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.923110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.923140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.923311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.923489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.923520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.923747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.923974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.924001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.924175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.924382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.924412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.924639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.924819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.924850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.925044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.925240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.925270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.925477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.925669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.925705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.925934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.926107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.926136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.926340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.926498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.926525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.926723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.926950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.926984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.927181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.927390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.927417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.927595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.927776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.927805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.927990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.928156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.928186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.928376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.928547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.928580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.928812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.929021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.929051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.929233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.929389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.929435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.929667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.929879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.929909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.930082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.930275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.930305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.930476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.930674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.930721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.930924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.931088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.931122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.931344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.931552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.931579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.931781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.931987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.932018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.932214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.932435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.932465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.932650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.932818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.932849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.933046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.933240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.933269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.933472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.933654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.933685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.933927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.934086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.934131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.934342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.934499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.934526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.934757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.934970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.911 [2024-05-12 07:06:51.934999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.911 qpair failed and we were unable to recover it.
00:26:44.911 [2024-05-12 07:06:51.935195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.935390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.935425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.935625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.935848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.935879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.936081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.936263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.936290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.936498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.936666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.936703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.936876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.937042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.937072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.937278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.937509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.937538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.937747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.937950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.937980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.938188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.938412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.938442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.938662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.938864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.938894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.939054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.939225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.939254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.939421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.939613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.939642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.939892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.940059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.940088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.940293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.940517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.940546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.940715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.940919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.940950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.941119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.941266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.941310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.941485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.941675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.941710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.941890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.942087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.942118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.942324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.942517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.942547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.942729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.942929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.942959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.943191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.943399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.943433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.943659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.943856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.943884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.944039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.944227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.944258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.944460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.944642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.944686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.944900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.945060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.945087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.945302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.945639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.945703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.945881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.946106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.946158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.946341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.946491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.946532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.946732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.946932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.946960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.947202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.947372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.947401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.912 qpair failed and we were unable to recover it.
00:26:44.912 [2024-05-12 07:06:51.947626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.947785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.912 [2024-05-12 07:06:51.947812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.913 qpair failed and we were unable to recover it.
00:26:44.913 [2024-05-12 07:06:51.947969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.913 [2024-05-12 07:06:51.948119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.913 [2024-05-12 07:06:51.948161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.913 qpair failed and we were unable to recover it.
00:26:44.913 [2024-05-12 07:06:51.948402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.913 [2024-05-12 07:06:51.948591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.913 [2024-05-12 07:06:51.948620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.913 qpair failed and we were unable to recover it.
00:26:44.913 [2024-05-12 07:06:51.948833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.949016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.949043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.949242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.949531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.949584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.949835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.949990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.950017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.950220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.950370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.950396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 
00:26:44.913 [2024-05-12 07:06:51.950545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.950749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.950776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.950954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.951141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.951167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.951337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.951513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.951539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.951718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.951924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.951951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 
00:26:44.913 [2024-05-12 07:06:51.952187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.952381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.952412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.952617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.952787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.952817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.953016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.953240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.953269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.953465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.953664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.953692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 
00:26:44.913 [2024-05-12 07:06:51.953903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.954079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.954106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.954268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.954498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.954547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.954774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.954950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.954979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.955205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.955398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.955427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 
00:26:44.913 [2024-05-12 07:06:51.955630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.955822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.955849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.956003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.956177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.956203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.956401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.956627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.956656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.956850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.957058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.957085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 
00:26:44.913 [2024-05-12 07:06:51.957244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.957395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.957421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.957570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.957723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.957750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.957901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.958054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.958080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.958306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.958500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.958527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 
00:26:44.913 [2024-05-12 07:06:51.958731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.958917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.958946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.913 [2024-05-12 07:06:51.959139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.959404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.913 [2024-05-12 07:06:51.959455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.913 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.959680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.959915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.959944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.960175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.960344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.960371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 
00:26:44.914 [2024-05-12 07:06:51.960545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.960706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.960734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.960912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.961076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.961112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.961293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.961533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.961572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.961811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.962012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.962039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 
00:26:44.914 [2024-05-12 07:06:51.962226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.962482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.962531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.962718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.962896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.962938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.963145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.963369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.963424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.963597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.963824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.963854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 
00:26:44.914 [2024-05-12 07:06:51.964024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.964249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.964275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.964457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.964634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.964660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.964841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.964987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.965013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.965184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.965384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.965411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 
00:26:44.914 [2024-05-12 07:06:51.965600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.965811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.965841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.966018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.966259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.966286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.966460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.966651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.966681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.966888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.967064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.967092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 
00:26:44.914 [2024-05-12 07:06:51.967287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.967490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.967519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.967721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.967922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.967952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.968130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.968309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.968354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.968553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.968761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.968788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 
00:26:44.914 [2024-05-12 07:06:51.969012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.969258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.969308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.969477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.969706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.969735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.969927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.970131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.970157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.914 [2024-05-12 07:06:51.970334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.970545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.970573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 
00:26:44.914 [2024-05-12 07:06:51.970783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.970992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.914 [2024-05-12 07:06:51.971020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.914 qpair failed and we were unable to recover it. 00:26:44.915 [2024-05-12 07:06:51.971231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.915 [2024-05-12 07:06:51.971381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.915 [2024-05-12 07:06:51.971407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.915 qpair failed and we were unable to recover it. 00:26:44.915 [2024-05-12 07:06:51.971615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.915 [2024-05-12 07:06:51.971822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.915 [2024-05-12 07:06:51.971852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.915 qpair failed and we were unable to recover it. 00:26:44.915 [2024-05-12 07:06:51.972053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.915 [2024-05-12 07:06:51.972302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.915 [2024-05-12 07:06:51.972329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.915 qpair failed and we were unable to recover it. 
00:26:44.915 [2024-05-12 07:06:51.972533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.915 [2024-05-12 07:06:51.972710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.915 [2024-05-12 07:06:51.972739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:44.915 qpair failed and we were unable to recover it.
00:26:44.918 [2024-05-12 07:06:52.009495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.918 [2024-05-12 07:06:52.009670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.918 [2024-05-12 07:06:52.009706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.918 qpair failed and we were unable to recover it. 00:26:44.918 [2024-05-12 07:06:52.009886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.918 [2024-05-12 07:06:52.010041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.918 [2024-05-12 07:06:52.010067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.918 qpair failed and we were unable to recover it. 00:26:44.918 [2024-05-12 07:06:52.010262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.918 [2024-05-12 07:06:52.010414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.918 [2024-05-12 07:06:52.010441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:44.918 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.010667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.010850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.010881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.011086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.011288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.011315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.011543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.011732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.011784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.011959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.012166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.012195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.012393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.012588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.012617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.012827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.012993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.013022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.013212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.013369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.013398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.013599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.013780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.013809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.013985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.014175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.014204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.014424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.014578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.014607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.014808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.014969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.015002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.015198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.015371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.015398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.015575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.015805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.015832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.016066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.016264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.016290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.016520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.016716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.016754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.016950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.017176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.017205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.017383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.017554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.017585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.017786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.017985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.018015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.018242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.018460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.018506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.018709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.018923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.018951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.019140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.019312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.019341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.019503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.019707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.019748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.019909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.020118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.020149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.020354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.020582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.020611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.020842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.021011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.021050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.021230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.021456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.021486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.021688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.021928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.021967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.022134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.022314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.022341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.022521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.022747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.022776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.022974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.023206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.023233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.023463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.023641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.023667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.023827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.024031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.024062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.024254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.024423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.024453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.024629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.024823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.024853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.025024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.025220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.025250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.025431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.025585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.025629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.025823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.026023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.026053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.026284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.026463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.026489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.026686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.026900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.026930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.027102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.027254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.027280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.027508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.027671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.027707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 
00:26:45.191 [2024-05-12 07:06:52.027932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.028131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.191 [2024-05-12 07:06:52.028178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.191 qpair failed and we were unable to recover it. 00:26:45.191 [2024-05-12 07:06:52.028353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.028531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.028574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.028771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.029020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.029070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.029273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.029446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.029475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-05-12 07:06:52.029647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.029847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.029881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.030055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.030270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.030320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.030557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.030765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.030794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.030968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.031129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.031159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-05-12 07:06:52.031361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.031514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.031556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.031752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.031953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.031980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.032187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.032359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.032388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.032577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.032765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.032795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-05-12 07:06:52.033004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.033210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.033239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.033457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.033630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.033660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.033855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.034032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.034062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.034257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.034424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.034453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-05-12 07:06:52.034640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.034809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.034838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.035067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.035221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.035248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.035455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.035683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.035717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 00:26:45.192 [2024-05-12 07:06:52.035889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.036109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.192 [2024-05-12 07:06:52.036137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.192 qpair failed and we were unable to recover it. 
00:26:45.192 [2024-05-12 07:06:52.036348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.036573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.036619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.036875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.037087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.037117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.037314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.037497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.037523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.037731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.037931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.037956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.038117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.038288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.038314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.038526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.038721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.038750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.038953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.039125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.039152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.039384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.039588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.039615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.039815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.039978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.040007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.040202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.040359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.040387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.040582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.040762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.040792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.041000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.041156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.041185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.041354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.041546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.041575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.041743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.041941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.041969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.042140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.042318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.042344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.042544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.042733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.042763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.042995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.043170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.043197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.043361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.043552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.043580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.043788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.043988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.044017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.044248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.044450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.044476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.044709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.044904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.044933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.045110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.045275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.045317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.045492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.045647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.045691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.045863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.046041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.046068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.046270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.046494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.046547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.046769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.046947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.046975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.047183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.047351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.047380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.047590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.047753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.047796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.048008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.048288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.048316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.048508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.048735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.048762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.048915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.049066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.049091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.049262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.049436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.049461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.192 [2024-05-12 07:06:52.049663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.049856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.192 [2024-05-12 07:06:52.049885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.192 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.050053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.050257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.050283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.050474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.050674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.050721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.050889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.051077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.051110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.051272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.051447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.051473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.051630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.051784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.051826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.052029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.052205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.052231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.052404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.052559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.052602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.052793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.052988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.053016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.053219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.053418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.053446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.053648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.053823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.053853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.054022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.054216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.054244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.054441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.054673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.054708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.054872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.055043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.055071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.055275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.055471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.055498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.055706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.055869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.055897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.056064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.056245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.056270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.056417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.056595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.056639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.056840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.057020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.057063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.057224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.057385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.057414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.057605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.057777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.057806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.057978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.058171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.058199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.058368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.058525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.058552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.058721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.058885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.058911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.059159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.059334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.059359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.059533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.059750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.059780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.059979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.060135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.060161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.060315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.060522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.060550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.060750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.060971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.060999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.061199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.061368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.061394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.061567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.061768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.061798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.062022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.062195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.062226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.062498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.062714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.062744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.062913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.063128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.063179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.063380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.063565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.063590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.063797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.064002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.064028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.064204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.064351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.064376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.064556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.064766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.064796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.064978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.065160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.065185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.065367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.065541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.193 [2024-05-12 07:06:52.065569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.193 qpair failed and we were unable to recover it.
00:26:45.193 [2024-05-12 07:06:52.065782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.193 [2024-05-12 07:06:52.065951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.193 [2024-05-12 07:06:52.065980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.193 qpair failed and we were unable to recover it. 00:26:45.193 [2024-05-12 07:06:52.066148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.193 [2024-05-12 07:06:52.066338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.193 [2024-05-12 07:06:52.066373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.193 qpair failed and we were unable to recover it. 00:26:45.193 [2024-05-12 07:06:52.066577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.193 [2024-05-12 07:06:52.066783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.193 [2024-05-12 07:06:52.066813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.193 qpair failed and we were unable to recover it. 00:26:45.193 [2024-05-12 07:06:52.066988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.193 [2024-05-12 07:06:52.067187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.193 [2024-05-12 07:06:52.067216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.193 qpair failed and we were unable to recover it. 
00:26:45.193 [2024-05-12 07:06:52.067388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.193 [2024-05-12 07:06:52.067588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.067617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.067783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.067953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.067978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.068195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.068387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.068415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.068577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.068767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.068797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.069047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.069299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.069347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.069518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.069718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.069747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.069941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.070180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.070229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.070427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.070625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.070653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.070867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.071046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.071072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.071275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.071505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.071554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.071794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.071962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.071995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.072223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.072442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.072471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.072709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.072889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.072918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.073081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.073259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.073288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.073513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.073724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.073753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.073949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.074131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.074158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.074311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.074528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.074556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.074730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.074907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.074934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.075143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.075342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.075370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.075568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.075738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.075768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.075997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.076220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.076256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.076446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.076668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.076703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.076889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.077099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.077124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.077332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.077557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.077582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.077754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.077984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.078010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.078193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.078457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.078502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.078705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.078857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.078899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.079100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.079295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.079325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.079524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.079701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.079729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.079938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.080110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.080140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.080314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.080539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.080588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.080793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.080968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.080998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.081163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.081399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.081451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.081630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.081813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.081842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.082051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.082278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.082307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.082503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.082729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.082760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.083007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.083336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.083363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.083562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.083792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.083822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.084048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.084293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.084320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.084503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.084685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.084717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.084931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.085205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.085233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.085480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.085636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.085662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.085886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.086084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.086113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.086278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.086531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.086591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.086773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.086941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.086974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.087171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.087406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.087432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.087603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.087787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.087813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.087985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.088149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.088179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.088399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.088548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.088589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.088790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.088987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.089016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.089240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.089430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.089455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.089658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.089856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.089886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 
00:26:45.194 [2024-05-12 07:06:52.090071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.090289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.090336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.194 qpair failed and we were unable to recover it. 00:26:45.194 [2024-05-12 07:06:52.090571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.090756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.194 [2024-05-12 07:06:52.090786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.090962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.091192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.091243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.091407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.091627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.091657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.091834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.092033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.092062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.092283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.092608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.092662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.092851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.093081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.093109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.093330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.093599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.093624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.093831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.094078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.094105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.094310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.094555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.094583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.094788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.094995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.095025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.095209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.095395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.095424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.095617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.095835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.095864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.096049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.096224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.096267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.096438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.096657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.096686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.096896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.097163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.097215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.097479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.097667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.097702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.097933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.098208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.098258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.098482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.098645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.098675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.098916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.099184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.099234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.099460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.099679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.099717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.099950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.100212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.100238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.100441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.100680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.100754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.100940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.101187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.101213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.101439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.101667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.101693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.101879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.102027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.102053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.102254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.102455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.102484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.102711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.102891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.102921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.103126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.103381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.103408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.103608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.103838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.103866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.104056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.104256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.104344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.104568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.104790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.104819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.104989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.105247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.105295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.105486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.105707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.105748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.105924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.106139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.106174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.106414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.106594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.106621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.106806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.107008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.107037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.107207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.107426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.107478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.107678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.107870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.107896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.108103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.108281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.108310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.108489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.108656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.108685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.108925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.109188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.109235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.109409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.109592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.109619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.109786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.109958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.109988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.110163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.110377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.110405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.110601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.110789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.110816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.110993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.111190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.111219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.111417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.111584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.111612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.111838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.112032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.112062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.112233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.112426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.112456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.112685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.112928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.112966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.113157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.113315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.113344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 
00:26:45.195 [2024-05-12 07:06:52.113576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.113799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.113828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.113994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.114172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.114197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.195 [2024-05-12 07:06:52.114402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.114574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.195 [2024-05-12 07:06:52.114603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.195 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.114781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.114949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.114977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.115154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.115373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.115425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.115617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.115849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.115876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.116026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.116216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.116241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.116461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.116657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.116685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.116870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.117059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.117084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.117229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.117411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.117437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.117584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.117759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.117784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.117962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.118179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.118208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.118375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.118543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.118573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.118774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.118950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.118998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.119196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.119439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.119489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.119718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.119925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.119959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.120143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.120319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.120362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.120577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.120736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.120777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.120956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.121155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.121188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.121421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.121614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.121641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.121848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.122057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.122082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.122285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.122515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.122563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.122764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.122944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.122976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.123154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.123380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.123407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.123601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.123800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.123830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.124012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.124194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.124238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.124416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.124588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.124629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.124811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.124967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.125007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.125241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.125435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.125463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.125632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.125808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.125834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.126015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.126231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.126259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.126483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.126678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.126744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.126941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.127185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.127220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.127459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.127647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.127675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.127859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.128009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.128050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.128268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.128462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.128488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.128666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.128883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.128912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.129113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.129341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.129367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.129542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.129732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.129785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.130018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.130241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.130269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.130495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.130669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.130701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.130861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.131056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.131085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.131261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.131451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.131478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.131633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.131852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.131882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.132087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.132280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.132310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.132477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.132676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.132711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.132900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.133071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.133099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.133321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.133524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.133553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.196 [2024-05-12 07:06:52.133720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.133889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.133917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.134146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.134383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.134433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.134637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.134795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.134821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 00:26:45.196 [2024-05-12 07:06:52.134975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.135148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.196 [2024-05-12 07:06:52.135174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.196 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.135325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.135524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.135552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.135756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.135979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.136028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.136229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.136374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.136399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.136626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.136797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.136827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.137028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.137225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.137253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.137460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.137630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.137671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.137850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.138042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.138071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.138284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.138434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.138477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.138672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.138912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.138938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.139109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.139260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.139286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.139484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.139634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.139660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.139870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.140106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.140155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.140424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.140642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.140671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.140864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.141008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.141034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.141251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.141408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.141433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.141627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.141840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.141867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.142068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.142233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.142258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.142449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.142619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.142650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.142805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.142998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.143026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.143213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.143432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.143460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.143666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.143873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.143902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.144091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.144282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.144311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.144528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.144713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.144738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.144908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.145101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.145129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.145358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.145560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.145585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.145806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.146011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.146036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.146218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.146393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.146419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.146589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.146751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.146779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.146947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.147169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.147198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.147408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.147602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.147630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.147818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.148008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.148036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.148203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.148401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.148426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.148604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.148789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.148816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.148970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.149124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.149165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.149330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.149510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.149535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.149719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.149922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.149951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.150225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.150423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.150451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.150643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.150874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.150901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.151094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.151425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.151475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.151643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.151870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.151898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.152130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.152277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.152302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.152508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.152718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.152747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.152937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.153115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.153141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.153290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.153473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.153501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.153733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.153964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.153993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.154166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.154353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.154381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.154578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.154810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.154839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 00:26:45.197 [2024-05-12 07:06:52.155036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.155292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.197 [2024-05-12 07:06:52.155320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.197 qpair failed and we were unable to recover it. 
00:26:45.197 [2024-05-12 07:06:52.155515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-05-12 07:06:52.155676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-05-12 07:06:52.155711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-05-12 07:06:52.155920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-05-12 07:06:52.156186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-05-12 07:06:52.156236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-05-12 07:06:52.156439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-05-12 07:06:52.156640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-05-12 07:06:52.156666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.197 qpair failed and we were unable to recover it.
00:26:45.197 [2024-05-12 07:06:52.156884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.197 [2024-05-12 07:06:52.157037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.157062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.157313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.157573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.157599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.157774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.157965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.157993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.158215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.158415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.158443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.158638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.158837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.158866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.159169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.159543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.159593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.159810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.159985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.160015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.160242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.160538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.160588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.160824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.160993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.161022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.161224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.161440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.161469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.161669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.161885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.161914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.162131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.162328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.162358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.162578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.162791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.162818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.163024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.163231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.163260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.163452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.163674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.163711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.163875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.164051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.164080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.164249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.164486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.164543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.164768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.164987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.165020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.165194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.165402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.165429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.165611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.165835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.165864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.166112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.166267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.166293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.166490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.166691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.166729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.166979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.167265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.167317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.167513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.167722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.167761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.167921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.168142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.168171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.168411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.168600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.168628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.168850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.169073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.169125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.169326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.169496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.169527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.169708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.169953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.169979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.170125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.170280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.170325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.170519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.170718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.170753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.170927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.171162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.171192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.171390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.171635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.171663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.171875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.172113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.172170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.172341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.172555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.172602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.172826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.173037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.173067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.173288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.173592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.173645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.173847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.174028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.174068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.174266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.174485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.174511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.174727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.174966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.175018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.175212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.175438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.175467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.175706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.175917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.175944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.176151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.176375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.176434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.176667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.176904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.176932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.198 qpair failed and we were unable to recover it.
00:26:45.198 [2024-05-12 07:06:52.177134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.177296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.198 [2024-05-12 07:06:52.177326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.177505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.177684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.177744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.177894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.178090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.178120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.178332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.178552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.178601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.178797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.179094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.179147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.179369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.179567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.179597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.179789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.179990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.180019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.180214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.180418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.180445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.180647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.180842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.180872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.181221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.181576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.181601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.181836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.182086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.182113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.182299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.182449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.182476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.182637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.182870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.182900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.183188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.183405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.183433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.183630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.183832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.183862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.184043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.184206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.184231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.184388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.184607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.184637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.184805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.185024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.185077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.185308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.185537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.185567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.185739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.185938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.185970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.186175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.186393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.186421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.186625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.186832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.186859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.187090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.187316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.187343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.187540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.187737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.187767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.187961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.188196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.188223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.188470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.188728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.188766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.188961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.189166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.189196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.189401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.189692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.189728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.189922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.190260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.190311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.190527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.190721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.190759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.190970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.191270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.191335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.191563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.191780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.191818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.192028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.192235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.192264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.192433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.192642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.192670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.192862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.193075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.193106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.193285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.193479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.193508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.193731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.193943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.193978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.194181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.194502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.194552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.194759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.194966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.194994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.195226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.195450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.195499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.195729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.195883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.199 [2024-05-12 07:06:52.195909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.199 qpair failed and we were unable to recover it.
00:26:45.199 [2024-05-12 07:06:52.196132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.196365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.196394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.196560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.196782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.196810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.197031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.197241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.197293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.197491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.197689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.197725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-05-12 07:06:52.197930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.198125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.198154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.198349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.198522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.198548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.198742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.198944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.198984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.199180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.199374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.199403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-05-12 07:06:52.199584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.199724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.199753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.199911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.200111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.200142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.200348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.200550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.200577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.200776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.200997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.201025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 
00:26:45.199 [2024-05-12 07:06:52.201301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.201520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.201547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.199 qpair failed and we were unable to recover it. 00:26:45.199 [2024-05-12 07:06:52.201754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.199 [2024-05-12 07:06:52.201955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.201987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.202195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.202396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.202426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.202619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.202851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.202878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.203054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.203249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.203278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.203507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.203661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.203687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.203897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.204089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.204137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.204336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.204531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.204557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.204759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.204991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.205021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.205214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.205412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.205442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.205672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.205885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.205913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.206068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.206266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.206294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.206504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.206712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.206753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.206929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.207140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.207169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.207376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.207557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.207584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.207743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.207924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.207949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.208127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.208396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.208425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.208622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.208857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.208883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.209071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.209264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.209292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.209521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.209680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.209712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.209951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.210143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.210169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.210404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.210639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.210665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.210859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.211063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.211102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.211331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.211479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.211505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.211717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.211898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.211928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.212122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.212321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.212348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.212555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.212817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.212872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.213064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.213353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.213405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.213599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.213776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.213806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.213972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.214139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.214169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.214361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.214560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.214588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.214827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.215054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.215083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.215273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.215532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.215585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.215818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.216033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.216100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.216311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.216534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.216563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.216729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.216933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.216963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.217268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.217461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.217490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.217670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.217883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.217911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.218143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.218319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.218345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.218525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.218715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.218752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.218972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.219304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.219354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.219559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.219764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.219790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.219978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.220192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.220240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.220447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.220650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.220677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.220893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.221150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.221179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.221353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.221632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.221694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.221903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.222104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.222134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.222329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.222530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.222557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.222716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.222923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.222950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.223175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.223394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.223442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.223647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.223874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.223904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 00:26:45.200 [2024-05-12 07:06:52.224098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.224312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.200 [2024-05-12 07:06:52.224340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.200 qpair failed and we were unable to recover it. 
00:26:45.200 [2024-05-12 07:06:52.224574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.200 [2024-05-12 07:06:52.224739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.200 [2024-05-12 07:06:52.224769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.200 qpair failed and we were unable to recover it.
00:26:45.200 [2024-05-12 07:06:52.224994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.200 [2024-05-12 07:06:52.225332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.200 [2024-05-12 07:06:52.225390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.200 qpair failed and we were unable to recover it.
00:26:45.200 [2024-05-12 07:06:52.225616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.200 [2024-05-12 07:06:52.225790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.200 [2024-05-12 07:06:52.225817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.200 qpair failed and we were unable to recover it.
00:26:45.200 [2024-05-12 07:06:52.226018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.200 [2024-05-12 07:06:52.226213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.200 [2024-05-12 07:06:52.226241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.226403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.226632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.226661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.226906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.227087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.227117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.227345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.227542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.227572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.227792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.228006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.228058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.228335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.228532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.228561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.228761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.228943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.228973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.229149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.229356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.229421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.229649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.229806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.229850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.230047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.230231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.230257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.230453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.230673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.230709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.230915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.231092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.231118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.231273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.231500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.231527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.231770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.231970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.231999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.232226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.232423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.232450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.232627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.232806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.232836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.233069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.233303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.233329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.233651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.233863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.233889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.234071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.234250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.234279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.234476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.234720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.234750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.234952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.235144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.235174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.235489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.235718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.235749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.235942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.236134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.236164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.236371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.236574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.236603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.236790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.236980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.237010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.237245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.237540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.237569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.237804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.237982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.238009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.238195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.238398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.238424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.238629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.238827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.238861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.239210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.239632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.239694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.239932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.240294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.240346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.240554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.240823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.240853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.241061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.241212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.241256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.241505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.241728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.241757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.241945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.242214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.242267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.242458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.242666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.242691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.242945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.243146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.243172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.243372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.243606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.243635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.243828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.244050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.244084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.244260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.244602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.244652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.244856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.245060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.245091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.245369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.245569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.245595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.245809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.246015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.246056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.246265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.246436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.246467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.246668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.246868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.246898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.247175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.247403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.247430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.247633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.247815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.247856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.201 qpair failed and we were unable to recover it.
00:26:45.201 [2024-05-12 07:06:52.248057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.201 [2024-05-12 07:06:52.248263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.248290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.248520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.248709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.248738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.248936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.249137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.249178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.249415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.249607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.249636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.249844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.249999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.250026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.250229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.250488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.250541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.250741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.250911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.250939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.251099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.251294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.251325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.251713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.251907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.251936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.252132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.252385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.252414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.252607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.252808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.252838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.253037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.253247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.253273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.253421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.253612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.253638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.253830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.254077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.254125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.254367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.254505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.254531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.254725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.254908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.254934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.255148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.255315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.255341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.255556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.255897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.255944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.256204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.256423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.256452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.256651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.256868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.256895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.257135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.257314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.257341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.257524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.257709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.202 [2024-05-12 07:06:52.257736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.202 qpair failed and we were unable to recover it.
00:26:45.202 [2024-05-12 07:06:52.258040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.258364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.258394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.258623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.258812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.258842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.259037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.259348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.259377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.259578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.259803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.259831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-05-12 07:06:52.260030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.260348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.260397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.260597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.260804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.260834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.261040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.261299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.261332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.261529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.261745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.261771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-05-12 07:06:52.261962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.262145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.262174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.262372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.262561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.262590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.262802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.263052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.263082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.263282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.263564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.263593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-05-12 07:06:52.263824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.264060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.264086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.264338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.264649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.264703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.264920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.265136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.265163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.265404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.265635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.265664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-05-12 07:06:52.265867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.266077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.266104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.266352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.266627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.266680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.266981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.267239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.267269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.267434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.267656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.267686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-05-12 07:06:52.267899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.268125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.268155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.268390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.268586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.268616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.268839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.269039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.269069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.269289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.269588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.269638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-05-12 07:06:52.269865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.270025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.270052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.270260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.270576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.270628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.270889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.271100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.271129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.271325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.271527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.271552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-05-12 07:06:52.271732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.271902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.271929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.272102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.272275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.272301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.272475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.272676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.272713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 00:26:45.202 [2024-05-12 07:06:52.272914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.273065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.202 [2024-05-12 07:06:52.273107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.202 qpair failed and we were unable to recover it. 
00:26:45.202 [2024-05-12 07:06:52.273330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.273544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.273596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.273765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.273994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.274020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.274200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.274406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.274435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.274632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.274814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.274858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-05-12 07:06:52.275073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.275296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.275325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.275517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.275719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.275747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.275901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.276092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.276121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.276343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.276580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.276606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-05-12 07:06:52.276838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.277035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.277064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.277297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.277475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.277501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.277656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.277847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.277875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.278107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.278318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.278348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-05-12 07:06:52.278598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.278774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.278801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.278977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.279126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.279153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.279329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.279530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.279556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.279765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.279973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.280000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-05-12 07:06:52.280194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.280385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.280414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.280637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.280817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.280844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.281020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.281226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.281257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.281493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.281706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.281733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-05-12 07:06:52.281907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.282087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.282116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.282354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.282565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.282620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.282819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.283077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.283125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.283334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.283533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.283559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-05-12 07:06:52.283761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.283982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.284012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.284234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.284577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.284627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.284820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.284998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.285022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.285224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.285421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.285451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-05-12 07:06:52.285641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.285837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.285867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.286067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.286350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.286403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.286604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.286836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.286866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.287100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.287299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.287348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.203 [2024-05-12 07:06:52.287542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.287721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.287748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.287929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.288110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.288136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.288316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.288518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.288548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 00:26:45.203 [2024-05-12 07:06:52.288830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.289026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.203 [2024-05-12 07:06:52.289056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.203 qpair failed and we were unable to recover it. 
00:26:45.477 [2024-05-12 07:06:52.327623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.327898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.327925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-05-12 07:06:52.328082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.328265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.328292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-05-12 07:06:52.328473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.328653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.328704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-05-12 07:06:52.328898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.329072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.329101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 
00:26:45.477 [2024-05-12 07:06:52.329330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.329497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.329526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-05-12 07:06:52.329728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.329959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.329988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-05-12 07:06:52.330219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.330419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.330449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-05-12 07:06:52.330643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.330853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.330881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 
00:26:45.477 [2024-05-12 07:06:52.331053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.331295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.331359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-05-12 07:06:52.331532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.331706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.331737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-05-12 07:06:52.331974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.332249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.332278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-05-12 07:06:52.332472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.332703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-05-12 07:06:52.332741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 
00:26:45.478 [2024-05-12 07:06:52.332966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.333136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.333165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.333335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.333525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.333555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.333781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.333956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.333985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.334141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.334489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.334542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 
00:26:45.478 [2024-05-12 07:06:52.334765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.335006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.335072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.335304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.335504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.335533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.335760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.335938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.335967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.336172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.336391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.336416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 
00:26:45.478 [2024-05-12 07:06:52.336621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.336831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.336856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.337072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.337267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.337301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.337526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.337753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.337784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.337982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.338177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.338206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 
00:26:45.478 [2024-05-12 07:06:52.338433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.338652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.338692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.338882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.339106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.339132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.339347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.339769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.339796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.339986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.340169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.340210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 
00:26:45.478 [2024-05-12 07:06:52.340398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.340593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.340623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.340846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.341073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.341100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.341320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.341554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.341583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.341781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.341973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.342007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 
00:26:45.478 [2024-05-12 07:06:52.342190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.342385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.342411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.342632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.342808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.342852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.343060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.343241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-05-12 07:06:52.343267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-05-12 07:06:52.343466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.343689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.343725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 
00:26:45.479 [2024-05-12 07:06:52.343917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.344135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.344161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.344405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.344603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.344632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.344862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.345066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.345093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.345262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.345442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.345469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 
00:26:45.479 [2024-05-12 07:06:52.345670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.345890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.345920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.346087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.346240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.346280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.346487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.346727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.346758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.346956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.347175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.347201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 
00:26:45.479 [2024-05-12 07:06:52.347368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.347565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.347592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.347806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.347999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.348029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.348281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.348465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.348492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.348702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.348904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.348931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 
00:26:45.479 [2024-05-12 07:06:52.349150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.349389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.349417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.349636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.349793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.349823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.350028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.350259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.350302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.350542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.350683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.350731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 
00:26:45.479 [2024-05-12 07:06:52.350920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.351225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.351255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.351474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.351665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.351707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.351943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.352141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.352170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.352378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.352736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.352767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 
00:26:45.479 [2024-05-12 07:06:52.352993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.353383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.353456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.353679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.353902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.353929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.479 qpair failed and we were unable to recover it. 00:26:45.479 [2024-05-12 07:06:52.354131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.479 [2024-05-12 07:06:52.354452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.480 [2024-05-12 07:06:52.354503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.480 qpair failed and we were unable to recover it. 00:26:45.480 [2024-05-12 07:06:52.354716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.480 [2024-05-12 07:06:52.354897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.480 [2024-05-12 07:06:52.354923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.480 qpair failed and we were unable to recover it. 
00:26:45.484 [2024-05-12 07:06:52.391041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.391242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.391273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.391545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.391721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.391752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.391932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.392090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.392133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.392400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.392612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.392639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 
00:26:45.484 [2024-05-12 07:06:52.392787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.392988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.393014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.393238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.393429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.393458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.393647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.393840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.393870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.394076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.394278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.394303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 
00:26:45.484 [2024-05-12 07:06:52.394514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.394723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.394753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.394958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.395110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.395152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.395360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.395538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.395565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.395790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.395960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.395990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 
00:26:45.484 [2024-05-12 07:06:52.396187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.396384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.396412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.396606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.396763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.396806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.396985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.397160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.397189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.397400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.397550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.397576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 
00:26:45.484 [2024-05-12 07:06:52.397763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.397933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.397961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.398131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.398304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.398333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.398522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.398730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.398759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.398949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.399209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.399259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 
00:26:45.484 [2024-05-12 07:06:52.399514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.399704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.399734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.484 [2024-05-12 07:06:52.399962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.400127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.484 [2024-05-12 07:06:52.400156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.484 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.400355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.400601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.400654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.400895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.401068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.401096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 
00:26:45.485 [2024-05-12 07:06:52.401300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.401462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.401488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.401718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.401889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.401917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.402090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.402248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.402274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.402447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.402625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.402655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 
00:26:45.485 [2024-05-12 07:06:52.402854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.403005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.403031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.403231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.403431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.403460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.403658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.403839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.403868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.404074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.404261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.404290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 
00:26:45.485 [2024-05-12 07:06:52.404464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.404657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.404687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.404867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.405032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.405060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.405265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.405464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.405493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.405704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.405865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.405891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 
00:26:45.485 [2024-05-12 07:06:52.406065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.406257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.406287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.406486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.406674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.406718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.406922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.407114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.407143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.407377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.407556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.407582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 
00:26:45.485 [2024-05-12 07:06:52.407782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.407969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.408000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.408204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.408374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.408400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.408579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.408748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.485 [2024-05-12 07:06:52.408778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.485 qpair failed and we were unable to recover it. 00:26:45.485 [2024-05-12 07:06:52.409000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.409155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.409202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 
00:26:45.486 [2024-05-12 07:06:52.409396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.409578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.409604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.409761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.409935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.409960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.410140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.410291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.410318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.410561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.410713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.410756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 
00:26:45.486 [2024-05-12 07:06:52.410934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.411139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.411166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.411365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.411588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.411617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.411826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.412000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.412034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.412232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.412431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.412461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 
00:26:45.486 [2024-05-12 07:06:52.412626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.412820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.412850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.413052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.413230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.413255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.413409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.413585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.413630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.413826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.413982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.414008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 
00:26:45.486 [2024-05-12 07:06:52.414197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.414453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.414479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.414639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.414816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.414843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.414994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.415167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.415193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.415339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.415557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.415585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 
00:26:45.486 [2024-05-12 07:06:52.415775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.415963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.415993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.416168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.416362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.416390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.416557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.416719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.416747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.416989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.417218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.417271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 
00:26:45.486 [2024-05-12 07:06:52.417517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.417694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.417745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.486 [2024-05-12 07:06:52.417943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.418134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.486 [2024-05-12 07:06:52.418183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.486 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.418384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.418548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.418577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.418735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.418933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.418958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 
00:26:45.487 [2024-05-12 07:06:52.419142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.419414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.419443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.419681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.419849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.419875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.420059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.420290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.420319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.420495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.420716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.420742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 
00:26:45.487 [2024-05-12 07:06:52.420922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.421149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.421177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.421401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.421599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.421627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.421850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.422054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.422084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.422288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.422570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.422599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 
00:26:45.487 [2024-05-12 07:06:52.422809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.422965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.423012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.423180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.423390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.423440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.423649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.423865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.423894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.424067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.424248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.424275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 
00:26:45.487 [2024-05-12 07:06:52.424477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.424779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.424808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.425079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.425389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.425436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.425655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.425887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.487 [2024-05-12 07:06:52.425916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.487 qpair failed and we were unable to recover it. 00:26:45.487 [2024-05-12 07:06:52.426108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.426341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.426393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 
00:26:45.488 [2024-05-12 07:06:52.426550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.426777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.426806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.427028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.427227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.427252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.427456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.427656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.427685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.427869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.428118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.428172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 
00:26:45.488 [2024-05-12 07:06:52.428447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.428667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.428704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.428908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.429113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.429140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.429324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.429551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.429581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.429790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.429971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.429996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 
00:26:45.488 [2024-05-12 07:06:52.430171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.430468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.430527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.430738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.430901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.430958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.431188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.431343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.431370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.431631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.431826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.431855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 
00:26:45.488 [2024-05-12 07:06:52.432028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.432252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.432307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.432477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.432663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.432691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.432934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.433148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.433197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.433419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.433584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.433613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 
00:26:45.488 [2024-05-12 07:06:52.433810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.434016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.434042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.434193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.434369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.434428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.434663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.434870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.434901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.435082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.435241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.435270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 
00:26:45.488 [2024-05-12 07:06:52.435446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.435638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.435666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.488 qpair failed and we were unable to recover it. 00:26:45.488 [2024-05-12 07:06:52.435883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.488 [2024-05-12 07:06:52.436076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.436102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.436285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.436585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.436636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.436857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.437009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.437035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 
00:26:45.489 [2024-05-12 07:06:52.437193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.437397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.437423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.437595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.437781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.437807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.437954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.438152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.438178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.438425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.438569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.438595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 
00:26:45.489 [2024-05-12 07:06:52.438795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.438980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.439005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.439228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.439573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.439623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.439824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.439979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.440006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.440149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.440332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.440362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 
00:26:45.489 [2024-05-12 07:06:52.440554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.440750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.440776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.440947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.441172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.441224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.441389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.441583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.441612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.441808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.441982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.442010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 
00:26:45.489 [2024-05-12 07:06:52.442216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.442393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.442418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.442596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.442829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.442855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.443016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.443161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.443204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.443396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.443599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.443625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 
00:26:45.489 [2024-05-12 07:06:52.443804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.444033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.444062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.444261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.444415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.444456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.444662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.444841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.489 [2024-05-12 07:06:52.444871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.489 qpair failed and we were unable to recover it. 00:26:45.489 [2024-05-12 07:06:52.445075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.445226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.445251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.490 qpair failed and we were unable to recover it. 
00:26:45.490 [2024-05-12 07:06:52.445463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.445660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.445688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.490 qpair failed and we were unable to recover it. 00:26:45.490 [2024-05-12 07:06:52.445899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.446065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.446092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.490 qpair failed and we were unable to recover it. 00:26:45.490 [2024-05-12 07:06:52.446289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.446490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.446519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.490 qpair failed and we were unable to recover it. 00:26:45.490 [2024-05-12 07:06:52.446685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.446905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.446931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.490 qpair failed and we were unable to recover it. 
00:26:45.490 [2024-05-12 07:06:52.447113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.447338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.447362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.490 qpair failed and we were unable to recover it. 00:26:45.490 [2024-05-12 07:06:52.447532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.447726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.447755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.490 qpair failed and we were unable to recover it. 00:26:45.490 [2024-05-12 07:06:52.447960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.448162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.448190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.490 qpair failed and we were unable to recover it. 00:26:45.490 [2024-05-12 07:06:52.448386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.448610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.490 [2024-05-12 07:06:52.448635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.490 qpair failed and we were unable to recover it. 
00:26:45.490 [2024-05-12 07:06:52.448839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.449033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.449062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.449239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.449412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.449440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.449645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.449831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.449857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.450027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.450277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.450329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.450584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.450784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.450813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.450985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.451180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.451208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.451383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.451537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.451564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.451738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.451939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.451966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.452211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.452438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.452489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.452684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.452896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.452922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.453101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.453276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.453301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.453447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.453617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.453643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.453817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.453967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.490 [2024-05-12 07:06:52.453992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.490 qpair failed and we were unable to recover it.
00:26:45.490 [2024-05-12 07:06:52.454171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.454345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.454370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.454568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.454739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.454766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.454945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.455151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.455180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.455371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.455544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.455577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.455745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.455938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.455967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.456177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.456335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.456360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.456511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.456688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.456723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.456945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.457115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.457143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.457336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.457523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.457551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.457721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.457921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.457950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.458116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.458340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.458368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.458539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.458703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.458731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.458894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.459118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.459146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.459323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.459559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.459588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.459768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.459962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.459989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.460158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.460346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.460375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.460536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.460740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.460768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.460975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.461174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.461202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.461391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.461584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.461612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.461807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.462001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.462029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.462222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.462458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.462484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.462636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.462792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.462818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.463027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.463220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.463246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.491 qpair failed and we were unable to recover it.
00:26:45.491 [2024-05-12 07:06:52.463442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.491 [2024-05-12 07:06:52.463605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.463633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.463837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.464038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.464065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.464271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.464465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.464493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.464693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.464889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.464916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.465193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.465444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.465492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.465728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.465921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.465946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.466130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.466298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.466324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.466523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.466716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.466742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.466912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.467163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.467211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.467418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.467595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.467621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.467801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.467982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.468026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.468226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.468404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.468447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.468648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.468851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.468879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.469117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.469282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.469308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.469501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.469727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.469756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.469925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.470092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.470121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.470405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.470646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.470670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.470896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.471071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.471101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.471306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.471477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.471507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.471683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.471863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.471891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.472101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.472279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.472321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.472528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.472713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.472738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.472919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.473072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.473097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.492 qpair failed and we were unable to recover it.
00:26:45.492 [2024-05-12 07:06:52.473294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.473525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.492 [2024-05-12 07:06:52.473551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.473760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.473960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.473988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.474155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.474410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.474459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.474678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.474854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.474883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.475078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.475227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.475252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.475465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.475631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.475659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.475877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.476050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.476092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.476291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.476470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.476514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.476711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.476866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.476896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.477106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.477294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.477322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.477506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.477685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.477718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.477879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.478051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.478100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.478315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.478512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.493 [2024-05-12 07:06:52.478539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.493 qpair failed and we were unable to recover it.
00:26:45.493 [2024-05-12 07:06:52.478713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.478887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.478913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 00:26:45.493 [2024-05-12 07:06:52.479118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.479310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.479338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 00:26:45.493 [2024-05-12 07:06:52.479532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.479716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.479743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 00:26:45.493 [2024-05-12 07:06:52.479896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.480120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.480170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 
00:26:45.493 [2024-05-12 07:06:52.480363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.480535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.480564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 00:26:45.493 [2024-05-12 07:06:52.480743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.480915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.480951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 00:26:45.493 [2024-05-12 07:06:52.481156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.481311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.481337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 00:26:45.493 [2024-05-12 07:06:52.481508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.481718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.481744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 
00:26:45.493 [2024-05-12 07:06:52.481956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.482208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.482256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 00:26:45.493 [2024-05-12 07:06:52.482464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.482615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.482641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 00:26:45.493 [2024-05-12 07:06:52.482798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.483052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.483098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 00:26:45.493 [2024-05-12 07:06:52.483312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.483517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.493 [2024-05-12 07:06:52.483542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.493 qpair failed and we were unable to recover it. 
00:26:45.494 [2024-05-12 07:06:52.483754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.483956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.483981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.484128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.484360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.484408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.484604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.484845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.484876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.485051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.485244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.485272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 
00:26:45.494 [2024-05-12 07:06:52.485478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.485674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.485708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.485891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.486062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.486091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.486262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.486416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.486442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.486596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.486790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.486819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 
00:26:45.494 [2024-05-12 07:06:52.487022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.487253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.487278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.487458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.487605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.487631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.487808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.487978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.488006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.488233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.488473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.488498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 
00:26:45.494 [2024-05-12 07:06:52.488654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.488814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.488840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.489021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.489261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.489307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.489560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.489777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.489804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.489951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.490129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.490157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 
00:26:45.494 [2024-05-12 07:06:52.490384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.490606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.490634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.490827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.491021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.491050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.491231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.491432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.491461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.491627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.491852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.491878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 
00:26:45.494 [2024-05-12 07:06:52.492082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.492267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.492315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.492472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.492681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.492715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.492897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.493071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.493097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.494 qpair failed and we were unable to recover it. 00:26:45.494 [2024-05-12 07:06:52.493280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.493473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.494 [2024-05-12 07:06:52.493501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 
00:26:45.495 [2024-05-12 07:06:52.493737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.493924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.493950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.494126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.494331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.494380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.494560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.494765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.494792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.495006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.495201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.495228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 
00:26:45.495 [2024-05-12 07:06:52.495469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.495656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.495685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.495886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.496075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.496103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.496280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.496504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.496532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.496721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.496925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.496951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 
00:26:45.495 [2024-05-12 07:06:52.497147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.497394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.497419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.497590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.497795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.497821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.497976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.498142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.498174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.498348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.498502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.498527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 
00:26:45.495 [2024-05-12 07:06:52.498711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.498911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.498940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.499140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.499396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.499432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.499648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.499851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.499879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.500079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.500224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.500248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 
00:26:45.495 [2024-05-12 07:06:52.500417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.500618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.500643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.500849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.501024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.501051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.501286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.501437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.501478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.501653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.501862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.501889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 
00:26:45.495 [2024-05-12 07:06:52.502076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.502380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.502412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.502592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.502751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.502795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.502993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.503158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.503187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 00:26:45.495 [2024-05-12 07:06:52.503383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.503619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.503645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.495 qpair failed and we were unable to recover it. 
00:26:45.495 [2024-05-12 07:06:52.503825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.504003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.495 [2024-05-12 07:06:52.504031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.504227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.504392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.504419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.504612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.504764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.504791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.504998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.505198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.505226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 
00:26:45.496 [2024-05-12 07:06:52.505405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.505604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.505631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.505800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.505954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.505982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.506151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.506304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.506345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.506543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.506782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.506811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 
00:26:45.496 [2024-05-12 07:06:52.506982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.507218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.507263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.507486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.507682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.507719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.507953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.508189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.508216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.508414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.508639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.508664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 
00:26:45.496 [2024-05-12 07:06:52.508819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.508974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.509000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.509197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.509385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.509412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.509643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.509797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.509823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.510006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.510261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.510309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 
00:26:45.496 [2024-05-12 07:06:52.510514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.510687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.510724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.510951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.511178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.511206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.511407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.511608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.511637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.511832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.511993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.512022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 
00:26:45.496 [2024-05-12 07:06:52.512216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.512540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.512588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.512789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.513001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.513051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.513253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.513478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.513507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.496 qpair failed and we were unable to recover it. 00:26:45.496 [2024-05-12 07:06:52.513683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.496 [2024-05-12 07:06:52.513889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.513914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 
00:26:45.497 [2024-05-12 07:06:52.514096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.514296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.514321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.514480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.514653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.514678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.514854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.515048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.515075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.515264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.515438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.515465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 
00:26:45.497 [2024-05-12 07:06:52.515664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.515898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.515924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.516090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.516290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.516316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.516496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.516669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.516705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.516878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.517057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.517092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 
00:26:45.497 [2024-05-12 07:06:52.517384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.517598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.517624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.517788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.517963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.517991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.518185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.518384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.518411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.518581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.518782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.518812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 
00:26:45.497 [2024-05-12 07:06:52.518997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.519218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.519244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.519440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.519642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.519671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.519876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.520104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.520133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.520336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.520507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.520532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 
00:26:45.497 [2024-05-12 07:06:52.520738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.520969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.520997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.521196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.521422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.521448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.521600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.521784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.521828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.522018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.522265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.522291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 
00:26:45.497 [2024-05-12 07:06:52.522467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.522651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.522679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.522863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.523059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.523087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.497 qpair failed and we were unable to recover it. 00:26:45.497 [2024-05-12 07:06:52.523264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.497 [2024-05-12 07:06:52.523413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.523456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.523628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.523819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.523852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 
00:26:45.498 [2024-05-12 07:06:52.524024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.524239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.524290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.524487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.524663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.524690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.524874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.525024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.525067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.525298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.525542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.525590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 
00:26:45.498 [2024-05-12 07:06:52.525807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.526007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.526032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.526214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.526418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.526446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.526644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.526802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.526828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.527012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.527215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.527241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 
00:26:45.498 [2024-05-12 07:06:52.527393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.527576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.527603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.527769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.527935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.527963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.528128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.528345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.528374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.528593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.528778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.528804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 
00:26:45.498 [2024-05-12 07:06:52.529008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.529251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.529277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.529460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.529665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.529691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.529897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.530075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.530100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.498 qpair failed and we were unable to recover it. 00:26:45.498 [2024-05-12 07:06:52.530247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.498 [2024-05-12 07:06:52.530420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.530445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 
00:26:45.499 [2024-05-12 07:06:52.530613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.530811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.530840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.530998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.531192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.531221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.531421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.531614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.531642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.531813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.531972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.532001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 
00:26:45.499 [2024-05-12 07:06:52.532205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.532377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.532418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.532581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.532812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.532842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.533018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.533175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.533200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.533439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.533637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.533663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 
00:26:45.499 [2024-05-12 07:06:52.533897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.534073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.534100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.534309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.534476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.534504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.534719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.534902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.534927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.535109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.535318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.535343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 
00:26:45.499 [2024-05-12 07:06:52.535618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.535769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.535795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.535973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.536146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.536171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.536324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.536495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.536521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.536722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.536922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.536946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 
00:26:45.499 [2024-05-12 07:06:52.537150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.537339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.537367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.537566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.537744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.537773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.537977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.538150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.538193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 00:26:45.499 [2024-05-12 07:06:52.538374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.538551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.499 [2024-05-12 07:06:52.538580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.499 qpair failed and we were unable to recover it. 
00:26:45.499 [2024-05-12 07:06:52.538811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.499 [2024-05-12 07:06:52.538965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.499 [2024-05-12 07:06:52.539007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.499 qpair failed and we were unable to recover it.
00:26:45.499 [2024-05-12 07:06:52.539176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.539372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.539399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.539600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.539780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.539825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.539986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.540216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.540240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.540418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.540644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.540672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.540919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.541100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.541126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.541283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.541458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.541483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.541682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.541921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.541948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.542162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.542463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.542513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.542738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.542925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.542954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.543181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.543350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.543379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.543544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.543747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.543776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.543965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.544231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.544259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.544433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.544599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.544628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.544829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.545005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.545041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.545241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.545438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.545467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.545660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.545834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.545862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.546097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.546292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.546320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.546540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.546733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.546762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.546929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.547162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.547190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.547376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.547567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.547595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.547770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.547996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.548021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.548237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.548422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.548449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.548677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.548859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.548888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.549115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.549347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.549375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.500 qpair failed and we were unable to recover it.
00:26:45.500 [2024-05-12 07:06:52.549611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.500 [2024-05-12 07:06:52.549761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.549804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.550008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.550157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.550183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.550383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.550589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.550614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.550789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.551025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.551054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.551232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.551414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.551439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.551595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.551805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.551830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.552011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.552168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.552194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.552385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.552590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.552615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.552774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.552975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.553000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.553176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.553380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.553408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.553587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.553776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.553805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.554026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.554273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.554298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.554493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.554650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.554677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.554876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.555076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.555104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.555277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.555504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.555533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.555710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.555887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.555915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.556118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.556347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.556373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.556545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.556691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.556740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.556936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.557138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.557164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.557338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.557493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.557517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.557707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.557878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.557906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.558104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.558319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.558366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.558587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.558767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.558793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.558970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.559143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.559168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.559370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.559565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.559593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.559798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.559975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.560016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.560179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.560348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.560376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.560600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.560773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.560800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.560956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.561130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.561172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.501 [2024-05-12 07:06:52.561372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.561538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.501 [2024-05-12 07:06:52.561566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.501 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.561741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.561946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.561972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.562173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.562359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.562388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.562607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.562829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.562856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.563030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.563257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.563285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.563516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.563693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.563726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.563898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.564113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.564139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.564287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.564443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.564469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.564650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.564795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.564839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.565004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.565200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.565229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.565401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.565570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.565598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.565805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.565959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.565989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.566183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.566328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.566369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.566533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.566703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.566733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.566940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.567098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.567140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.567307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.567497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.567525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.567719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.567888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.567917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.568111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.568304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.568332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.568501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.568727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.568756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.568956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.569177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.569228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.569439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.569594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.569619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.569788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.569988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.570021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.570245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.570537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.570585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.570790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.570955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.570983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.571189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.571388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.571417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.571610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.571785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.571811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.572039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.572238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.572264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.572458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.572656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.572686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.572868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.573048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.502 [2024-05-12 07:06:52.573074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.502 qpair failed and we were unable to recover it.
00:26:45.502 [2024-05-12 07:06:52.573273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.573429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.573457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.573653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.573834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.573863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.574092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.574244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.574286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.574516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.574666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.574692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.574862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.575067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.575093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.575295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.575556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.575585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.575787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.575993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.576019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.576194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.576342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.576386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.576598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.576794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.576824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.577003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.577243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.577272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.577467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.577660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.577691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.577922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.578125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.578153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.578350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.578540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.578567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.578792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.578984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.579013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.579202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.579409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.579434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.579612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.579792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.579821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.580015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.580180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.580207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.503 qpair failed and we were unable to recover it.
00:26:45.503 [2024-05-12 07:06:52.580404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.580597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.503 [2024-05-12 07:06:52.580623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.580810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.580988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.581030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.581231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.581395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.581423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.581627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.581777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.581803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.582003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.582261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.582307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.582497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.582705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.582734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.582906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.583084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.583114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.583275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.583444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.583475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.583669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.583878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.583904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.584086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.584281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.584309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.584479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.584665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.584693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.584938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.585171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.585197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.585373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.585524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.585550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.585731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.585938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.585964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.586148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.586315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.586343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.586515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.586721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.586750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.586946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.587127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.587172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.587371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.587536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.587564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.587735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.587891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.587916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.588093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.588247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.588289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.588518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.588716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.588744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.588939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.589128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.589157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.589353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.589534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.589561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.589798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.589947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.589990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.590226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.590405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.590430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.590631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.590819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.590845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.591003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.591208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.591238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.504 [2024-05-12 07:06:52.591421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.591616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.504 [2024-05-12 07:06:52.591644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.504 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.591850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.592044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.592078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.592301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.592507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.592533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.592736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.592893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.592918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.593117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.593280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.593308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.593502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.593727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.593756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.593929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.594157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.594183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.594335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.594531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.594559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.594770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.594961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.594989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.595227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.595444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.595469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.595703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.595927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.595954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.596191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.596420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.596476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.596677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.596854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.596882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.597074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.597272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.597297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.597496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.597671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.597709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.597884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.598062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.598087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.598239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.598449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.775 [2024-05-12 07:06:52.598478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.775 qpair failed and we were unable to recover it.
00:26:45.775 [2024-05-12 07:06:52.598678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.598866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.598892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-05-12 07:06:52.599080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.599277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.599305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-05-12 07:06:52.599539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.599742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.599771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-05-12 07:06:52.599944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.600130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.600158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 
00:26:45.775 [2024-05-12 07:06:52.600356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.600595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.600623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-05-12 07:06:52.600831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.601035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.601060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-05-12 07:06:52.601251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.601406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.601431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-05-12 07:06:52.601706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.601902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.601931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 
00:26:45.775 [2024-05-12 07:06:52.602164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-05-12 07:06:52.602389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.602414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.602614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.602845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.602872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.603033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.603228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.603256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.603426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.603638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.603666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-05-12 07:06:52.603849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.604056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.604083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.604239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.604450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.604478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.604675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.604833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.604857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.605013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.605161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.605185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-05-12 07:06:52.605360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.605510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.605536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.605739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.605905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.605933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.606125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.606347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.606375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.606592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.606800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.606826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-05-12 07:06:52.606974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.607199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.607227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.607400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.607579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.607608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.607800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.607948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.607990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.608157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.608361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.608389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-05-12 07:06:52.608611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.608809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.608840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.609049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.609251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.609300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.609468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.609627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.609651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.609854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.610074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.610126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-05-12 07:06:52.610346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.610542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.610568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.610721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.610898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.610923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.611103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.611358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.611404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.611604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.611803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.611832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-05-12 07:06:52.612034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.612212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.612236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.612387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.612539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.612568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.612728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.612922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.612950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.613114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.613300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.613330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-05-12 07:06:52.613524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.613681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.613717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.613910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.614104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-05-12 07:06:52.614151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-05-12 07:06:52.614348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.614557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.614605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.614775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.614969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.614998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 
00:26:45.777 [2024-05-12 07:06:52.615199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.615382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.615407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.615607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.615804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.615833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.616061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.616245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.616270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.616465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.616663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.616691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 
00:26:45.777 [2024-05-12 07:06:52.616900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.617072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.617100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.617295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.617517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.617566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.617777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.617979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.618008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.618204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.618394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.618441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 
00:26:45.777 [2024-05-12 07:06:52.618635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.618833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.618862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.619043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.619224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.619249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.619401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.619599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.619627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.619786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.619962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.619991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 
00:26:45.777 [2024-05-12 07:06:52.620203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.620374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.620403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.620603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.620748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.620774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.620964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.621155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.621183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.621371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.621562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.621590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 
00:26:45.777 [2024-05-12 07:06:52.621786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.621964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.621991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.622196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.622394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.622422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.622624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.622865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.622912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.623089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.623307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.623335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 
00:26:45.777 [2024-05-12 07:06:52.623528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.623692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.623727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.623890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.624111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.624136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.624343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.624567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.624592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.624753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.624907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.624933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 
00:26:45.777 [2024-05-12 07:06:52.625166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.625367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.625393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.625572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.625713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.625739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.625920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.626123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.626148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.777 qpair failed and we were unable to recover it. 00:26:45.777 [2024-05-12 07:06:52.626300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.626473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-05-12 07:06:52.626499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-05-12 07:06:52.626714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.626926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.626955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.627139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.627324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.627366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.627571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.627769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.627798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.627963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.628129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.628157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-05-12 07:06:52.628329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.628534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.628559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.628760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.628932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.628961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.629137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.629296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.629321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.629513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.629710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.629736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-05-12 07:06:52.629885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.630045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.630069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.630217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.630431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.630465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.630687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.630933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.630960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.631131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.631337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.631364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-05-12 07:06:52.631557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.631716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.631746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.631944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.632152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.632180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.632385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.632587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.632614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.632783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.632980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.633009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-05-12 07:06:52.633191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.633387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.633419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.633595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.633754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.633781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.633959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.634132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.634161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.634362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.634543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.634568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-05-12 07:06:52.634808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.634976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.635004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.635192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.635426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.635474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.635680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.635843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.635869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.636120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.636328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.636357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-05-12 07:06:52.636550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.636743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.636768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.636951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.637148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.637176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.637367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.637597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.637630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.637857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.638052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.638081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-05-12 07:06:52.638309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.638509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.638546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-05-12 07:06:52.638733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-05-12 07:06:52.638918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.638946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.639146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.639348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.639377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.639540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.639708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.639752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-05-12 07:06:52.639908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.640097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.640122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.640295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.640484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.640513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.640684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.640848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.640873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.641082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.641267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.641292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-05-12 07:06:52.641441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.641647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.641672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.641898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.642054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.642079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.642282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.642458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.642483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.642627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.642796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.642823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-05-12 07:06:52.643026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.643228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.643276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.643439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.643611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.643639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.643819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.644015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.644043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.644235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.644486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.644534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-05-12 07:06:52.644708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.644913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.644938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.645111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.645287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.645315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.645491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.645687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.645730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.645955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.646102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.646128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-05-12 07:06:52.646307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.646510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.646538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.646735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.646907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.646936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.647131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.647288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.647313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.647515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.647730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.647757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-05-12 07:06:52.647984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.648183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.648211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-05-12 07:06:52.648421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.648593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-05-12 07:06:52.648635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.648873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.649048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.649077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.649263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.649470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.649498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-05-12 07:06:52.649763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.649915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.649940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.650146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.650318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.650346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.650542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.650748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.650778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.650949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.651131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.651178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-05-12 07:06:52.651397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.651595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.651624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.651789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.651996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.652021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.652224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.652458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.652504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.652682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.652858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.652884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-05-12 07:06:52.653063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.653241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.653267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.653502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.653684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.653716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.653899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.654102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.654130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.654294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.654488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.654535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-05-12 07:06:52.654737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.654906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.654936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.655107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.655310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.655336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.655489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.655657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.655685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.655908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.656114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.656141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-05-12 07:06:52.656366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.656537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.656564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.656761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.656925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.656955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.657128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.657310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.657336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.657512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.657676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.657708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-05-12 07:06:52.657867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.658045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.658070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.658312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.658536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.658588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.658827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.659045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.659074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.659269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.659464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.659492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-05-12 07:06:52.659663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.659878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.659904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.660056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.660261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.660290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.660470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.660652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.660703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-05-12 07:06:52.660880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.661080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-05-12 07:06:52.661108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 
00:26:45.781 [2024-05-12 07:06:52.661330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.661492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.661521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.661682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.661848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.661877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.662080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.662231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.662257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.662476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.662674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.662715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 
00:26:45.781 [2024-05-12 07:06:52.662955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.663097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.663122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.663290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.663486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.663515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.663788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.663966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.663992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.664170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.664346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.664371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 
00:26:45.781 [2024-05-12 07:06:52.664575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.664777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.664805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.665004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.665211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.665239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.665418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.665569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.665613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.665819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.665974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.666016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 
00:26:45.781 [2024-05-12 07:06:52.666193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.666381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.666409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.666569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.666759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.666811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.666996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.667168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.667194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.667399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.667595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.667624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 
00:26:45.781 [2024-05-12 07:06:52.667829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.668065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.668091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.668341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.668537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.668565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.668764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.668957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.668985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.669163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.669320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.669346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 
00:26:45.781 [2024-05-12 07:06:52.669573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.669728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.669754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.669911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.670085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.670129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.670299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.670436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.670461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.670668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.670848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.670878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 
00:26:45.781 [2024-05-12 07:06:52.671075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.671289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.671317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.671489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.671634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.671660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.671913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.672074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.672101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 00:26:45.781 [2024-05-12 07:06:52.672295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.672529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.672555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.781 qpair failed and we were unable to recover it. 
00:26:45.781 [2024-05-12 07:06:52.672739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.672941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.781 [2024-05-12 07:06:52.672969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.673167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.673356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.673383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.673578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.673779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.673808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.674012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.674178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.674204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 
00:26:45.782 [2024-05-12 07:06:52.674470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.674721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.674751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.674919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.675085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.675113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.675306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.675599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.675649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.675833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.676033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.676075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 
00:26:45.782 [2024-05-12 07:06:52.676281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.676453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.676478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.676673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.676902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.676928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.677079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.677296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.677324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.677540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.677734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.677762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 
00:26:45.782 [2024-05-12 07:06:52.677946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.678145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.678171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.678412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.678613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.678639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.678821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.679004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.679029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.679231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.679448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.679476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 
00:26:45.782 [2024-05-12 07:06:52.679710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.679903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.679935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.680130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.680320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.680348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.680570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.680769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.680798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.680959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.681165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.681193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 
00:26:45.782 [2024-05-12 07:06:52.681356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.681555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.681584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.681781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.681948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.681977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.682179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.682364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.682398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.682614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.682798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.682828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 
00:26:45.782 [2024-05-12 07:06:52.682990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.683188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.683214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.683408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.683629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.683657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.683862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.684023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.684048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.684255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.684494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.684540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 
00:26:45.782 [2024-05-12 07:06:52.684763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.684962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.684990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.685213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.685543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.685594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.782 qpair failed and we were unable to recover it. 00:26:45.782 [2024-05-12 07:06:52.685772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.782 [2024-05-12 07:06:52.685974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.686000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.783 qpair failed and we were unable to recover it. 00:26:45.783 [2024-05-12 07:06:52.686190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.686397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.686423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.783 qpair failed and we were unable to recover it. 
00:26:45.783 [2024-05-12 07:06:52.686619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.686820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.686849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.783 qpair failed and we were unable to recover it. 00:26:45.783 [2024-05-12 07:06:52.687049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.687212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.687239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.783 qpair failed and we were unable to recover it. 00:26:45.783 [2024-05-12 07:06:52.687440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.687632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.687660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.783 qpair failed and we were unable to recover it. 00:26:45.783 [2024-05-12 07:06:52.687868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.688174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.783 [2024-05-12 07:06:52.688224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.783 qpair failed and we were unable to recover it. 
00:26:45.783 [2024-05-12 07:06:52.688540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.688760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.688790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.689002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.689271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.689297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.689474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.689705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.689735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.689926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.690128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.690155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.690333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.690504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.690529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.690755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.690958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.690985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.691141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.691347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.691371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.691554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.691788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.691815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.692006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.692206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.692232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.692467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.692687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.692726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.692922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.693085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.693113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.693339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.693593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.693639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.693861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.694037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.694065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.694301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.694497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.694525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.694726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.694909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.694934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.695120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.695315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.695343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.695561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.695784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.695813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.696044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.696223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.696248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.696450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.696649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.696675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.696860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.697031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.697059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.697262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.697462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.697487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.697662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.697893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.697922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.698144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.698368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.698397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.783 qpair failed and we were unable to recover it.
00:26:45.783 [2024-05-12 07:06:52.698594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.783 [2024-05-12 07:06:52.698792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.698821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.699012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.699209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.699233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.699435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.699660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.699689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.699894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.700093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.700180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.700349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.700542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.700571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.700770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.700963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.700992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.701182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.701441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.701489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.701689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.701901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.701929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.702152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.702389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.702419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.702612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.702821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.702848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.702998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.703147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.703188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.703418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.703646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.703674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.703879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.704060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.704103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.704324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.704478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.704506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.704706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.704900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.704929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.705096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.705306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.705358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.705560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.705755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.705784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.705986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.706212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.706240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.706412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.706617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.706643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.706829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.707009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.707037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.707257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.707512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.707540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.707764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.708009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.708046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.708251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.708465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.708515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.708719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.708944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.708972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.709145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.709363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.709397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.709642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.709791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.709817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.784 qpair failed and we were unable to recover it.
00:26:45.784 [2024-05-12 07:06:52.709998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-05-12 07:06:52.710195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.710223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.710404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.710580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.710627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.710846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.711019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.711049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.711378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.711620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.711648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.711888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.712126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.712173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.712364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.712544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.712570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.712768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.712965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.712991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.713148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.713342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.713377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.713592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.713820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.713847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.714049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.714272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.714323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.714493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.714687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.714724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.714945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.715113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.715152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.715366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.715612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.715640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.715843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.716045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.716070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.716305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.716525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.716553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.716804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.717028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.717056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.717262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.717453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.717481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.717707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.717904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.717931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.718111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.718304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.718333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.718606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.718827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.718855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.719053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.719287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.719326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.719586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.719825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.719854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.720037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.720218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.720259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.720607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.720827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.720857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.721080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.721272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.721300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.721474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.721670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.721702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.721904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.722137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.722162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.722459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.722681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.722717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.722941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.723190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.723243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.723450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.723601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.785 [2024-05-12 07:06:52.723642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.785 qpair failed and we were unable to recover it.
00:26:45.785 [2024-05-12 07:06:52.723894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.724047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.724072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.724300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.724488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.724517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.724741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.724948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.724974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.725221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.725377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.725406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.725608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.725815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.725841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.726015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.726211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.726236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.726390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.726607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.726633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.726779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.726927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.726969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.727191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.727378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.727406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.727574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.727766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.727796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.727989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.728178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.728206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.728430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.728609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.728634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.728875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.729118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.729146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.729493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.729723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.729753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.729961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.730178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.730204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.730379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.730577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.730605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.730800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.730968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.730996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.731187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.731414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.731439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.731623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.731847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.731877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.732054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.732279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.732327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.732528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.732712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.732754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.732955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.733187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.733233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.733407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.733636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.733662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.733853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.734074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.734123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.734299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.734521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.734578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.734778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.734957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.734982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.735224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.735585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.735641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.735856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.736056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.736081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.736263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.736484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.736512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.786 qpair failed and we were unable to recover it.
00:26:45.786 [2024-05-12 07:06:52.736712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.736903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.786 [2024-05-12 07:06:52.736932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.737122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.737287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.737315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.737484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.737665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.737690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.737926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.738139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.738185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.738438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.738590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.738616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.738808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.739005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.739033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.739227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.739366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.739408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.739607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.739805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.739835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.740030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.740228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.740259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.740465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.740636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.740678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.740863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.741077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.741105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.741302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.741526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.741554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.741756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.741955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.741984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.742214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.742409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.742437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.742653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.742842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.742867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.743045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.743305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.743358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.743589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.743781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.743810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.744033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.744224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.744249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.744390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.744586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.744615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.744807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.745064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.745116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.745407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.745622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.745650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.745880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.746142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.746187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.746391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.746589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.746616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.746853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.747032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.747059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.747354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.747595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.747623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.747806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.748112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.748162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.748358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.748565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.748604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.748781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.749028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.749053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.749338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.749627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.749680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.749859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.750027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.750056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.787 qpair failed and we were unable to recover it.
00:26:45.787 [2024-05-12 07:06:52.750320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.787 [2024-05-12 07:06:52.750677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.750711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.750905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.751166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.751217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.751491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.751721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.751750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.751932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.752183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.752240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.752497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.752701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.752727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.752905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.753151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.753214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.753528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.753777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.753806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.754030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.754202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.754230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.754422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.754616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.754644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.754823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.755014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.755043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.755239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.755457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.755486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.755658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.755863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.755892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.756126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.756324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.756353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.756573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.756796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.756825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.756995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.757210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.757253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.757441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.757598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.757628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.757838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.757994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.758019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.758218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.758405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.758434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.758652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.758857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.758885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.759106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.759359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.759388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.759619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.759830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.759858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.760081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.760470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.760530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.760733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.760888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.760913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.761116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.761426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.761476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.761748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.762021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.762050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.762247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.762481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.788 [2024-05-12 07:06:52.762507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.788 qpair failed and we were unable to recover it.
00:26:45.788 [2024-05-12 07:06:52.762701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.762928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.762956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.763156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.763311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.763336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.763619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.763828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.763857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.764087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.764244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.764270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.764514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.764692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.764727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.764924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.765189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.765217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.765491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.765720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.765750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.765949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.766144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.766172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.766393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.766611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.766639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.766836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.767028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.767055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.767282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.767503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.767528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.767713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.767864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.767889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.768063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.768266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.768312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.768535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.768770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.768796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.769006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.769223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.769252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.769481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.769673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.769708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.769913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.770118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.770147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.770364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.770593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.770643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.770849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.771040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.771065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.771243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.771463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.771492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.771653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.771849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.771878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.772068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.772236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.772263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.772432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.772652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.772680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.772892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.773048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.773076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.773299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.773487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.773515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.773739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.773957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.773983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.774188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.774334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.774360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.774539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.774711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.774741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.774915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.775139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.775168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.789 [2024-05-12 07:06:52.775389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.775610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.789 [2024-05-12 07:06:52.775638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.789 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.775835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.775991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.776020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.776191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.776389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.776418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.776609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.776798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.776827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.777003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.777231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.777257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.777412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.777567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.777607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.777776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.777947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.777977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.778200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.778395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.778424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.778609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.778810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.778840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.779013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.779194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.779238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.779474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.779700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.779729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.780127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.780493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.780548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.780782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.781051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.781079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.781285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.781473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.781502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.781693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.781863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.781891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.782084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.782278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.782304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.782508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.782715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.782744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.782948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.783227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.783282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.783480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.783677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.783720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.783925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.784116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.784144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.784375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.784570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.784598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.784766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.784958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.784983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.785185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.785344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.790 [2024-05-12 07:06:52.785372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.790 qpair failed and we were unable to recover it.
00:26:45.790 [2024-05-12 07:06:52.785542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.785899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.785953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.790 qpair failed and we were unable to recover it. 00:26:45.790 [2024-05-12 07:06:52.786141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.786358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.786386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.790 qpair failed and we were unable to recover it. 00:26:45.790 [2024-05-12 07:06:52.786586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.786842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.786902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.790 qpair failed and we were unable to recover it. 00:26:45.790 [2024-05-12 07:06:52.787120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.787337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.787365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.790 qpair failed and we were unable to recover it. 
00:26:45.790 [2024-05-12 07:06:52.787584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.787784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.787835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.790 qpair failed and we were unable to recover it. 00:26:45.790 [2024-05-12 07:06:52.788055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.788221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.788249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.790 qpair failed and we were unable to recover it. 00:26:45.790 [2024-05-12 07:06:52.788451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.788729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.788758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.790 qpair failed and we were unable to recover it. 00:26:45.790 [2024-05-12 07:06:52.788956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.789171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.790 [2024-05-12 07:06:52.789218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.790 qpair failed and we were unable to recover it. 
00:26:45.791 [2024-05-12 07:06:52.789462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.789644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.789669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.789878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.790045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.790075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.790293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.790556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.790606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.790807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.791020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.791067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 
00:26:45.791 [2024-05-12 07:06:52.791329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.791551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.791577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.791819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.792095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.792147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.792361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.792585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.792613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.792808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.793042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.793091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 
00:26:45.791 [2024-05-12 07:06:52.793259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.793477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.793506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.793729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.793896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.793924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.794150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.794374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.794402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.794579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.794774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.794803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 
00:26:45.791 [2024-05-12 07:06:52.795012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.795208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.795250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.795452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.795681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.795716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.795902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.796110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.796139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.796313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.796508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.796537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 
00:26:45.791 [2024-05-12 07:06:52.796719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.796974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.796999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.797156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.797306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.797332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.797557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.797705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.797746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.797914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.798121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.798161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 
00:26:45.791 [2024-05-12 07:06:52.798439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.798666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.798702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.798896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.799102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.799132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.799349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.799527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.799574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.800667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.800896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.800929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 
00:26:45.791 [2024-05-12 07:06:52.801162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.801349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.801377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.801569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.801774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.801804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.802031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.802237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.802265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.802462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.802637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.802677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 
00:26:45.791 [2024-05-12 07:06:52.802901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.803057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.803099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.791 qpair failed and we were unable to recover it. 00:26:45.791 [2024-05-12 07:06:52.803317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.791 [2024-05-12 07:06:52.803568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.803623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.803828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.803999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.804027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.804226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.804435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.804465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 
00:26:45.792 [2024-05-12 07:06:52.804670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.804890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.804916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.805101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.805310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.805338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.805537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.805728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.805757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.805958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.806193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.806238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 
00:26:45.792 [2024-05-12 07:06:52.806498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.806673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.806708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.806903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.807085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.807112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.807317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.807515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.807544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.807768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.807937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.807966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 
00:26:45.792 [2024-05-12 07:06:52.808185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.808412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.808440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.808611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.808812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.808842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.809035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.809262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.809291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.809484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.809717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.809744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 
00:26:45.792 [2024-05-12 07:06:52.809918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.810059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.810098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.810310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.810513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.810543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.810757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.810935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.810961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.811182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.811405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.811436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 
00:26:45.792 [2024-05-12 07:06:52.811627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.811831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.811858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.812065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.812319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.812348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.812547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.812773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.812799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.812976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.813179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.813224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 
00:26:45.792 [2024-05-12 07:06:52.813491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.813688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.813722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.813875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.814043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.814069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.814247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.814441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.814468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.814662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.814831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.814858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 
00:26:45.792 [2024-05-12 07:06:52.815054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.815382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.815408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.815609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.815818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-05-12 07:06:52.815846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-05-12 07:06:52.816023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.816184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.816212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-05-12 07:06:52.816404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.816567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.816594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-05-12 07:06:52.816825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.817004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.817030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-05-12 07:06:52.817199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.817406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.817452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-05-12 07:06:52.817680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.817877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.817903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-05-12 07:06:52.818083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.818279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-05-12 07:06:52.818325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-05-12 07:06:52.818585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.818797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.818823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.819029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.819193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.819221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.819401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.819592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.819617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.819801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.819953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.819979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.820268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.820585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.820632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.820861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.821036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.821062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.821246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.821478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.821505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.821710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.821883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.821909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.822156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.822432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.822460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.822634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.822857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.822884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.823057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.823252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.823279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.823555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.823785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.823810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.823990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.824238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.824264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.824461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.824660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.824689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.824919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.825103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.825128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.825295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.825509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.825538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.825741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.825944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.825969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.826213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.826360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.826385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.826561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.826738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.826768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.826930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.827159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.827206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.827461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.827709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.827738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.793 qpair failed and we were unable to recover it.
00:26:45.793 [2024-05-12 07:06:52.827918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.793 [2024-05-12 07:06:52.828123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.828152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.828347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.828542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.828588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.828811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.828970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.828995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.829196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.829393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.829419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.829648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.829819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.829845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.829995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.830224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.830269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.830541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.830779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.830806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.830982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.831214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.831239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.831471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.831668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.831703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.831876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.832036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.832077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.832310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.832521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.832550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.832784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.832952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.832977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.833181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.833391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.833419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.833587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.833774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.833800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.833953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.834185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.834211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.835182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.835397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.835428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.835617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.835812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.835839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.835992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.836230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.836262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.836469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.836685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.836730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.836962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.837166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.837191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.837420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.837616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.837645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.837850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.838051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.838096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.838287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.838507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.838536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.838771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.838915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.838941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.839118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.839315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.839345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.839510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.839691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.839723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.839881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.840100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.840129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.840330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.840530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.840557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.840745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.840905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.840930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.841100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.841301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.841330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.794 qpair failed and we were unable to recover it.
00:26:45.794 [2024-05-12 07:06:52.841501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.794 [2024-05-12 07:06:52.841705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.841735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.841944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.842182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.842211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.842400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.842596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.842625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.842828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.843021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.843049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.843274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.843475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.843503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.843725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.843955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.843983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.844147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.844359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.844406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.844612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.844814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.844839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.845036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.845218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.845261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.845432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.845662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.845688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.845863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.846064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.846089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.846265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.846430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.846459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.846666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.846875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.846903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.847075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.847312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.847358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.847579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.847778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.847804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.847979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.848198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.848226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.848426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.848648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.848676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.848850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.849015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.849043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.849263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.849477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.849525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.849718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.849870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.849911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.850118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.850292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-05-12 07:06:52.850318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-05-12 07:06:52.850517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.850703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.850732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.795 qpair failed and we were unable to recover it. 00:26:45.795 [2024-05-12 07:06:52.850902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.851102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.851131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.795 qpair failed and we were unable to recover it. 00:26:45.795 [2024-05-12 07:06:52.851310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.851492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.851518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.795 qpair failed and we were unable to recover it. 00:26:45.795 [2024-05-12 07:06:52.851710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.851888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.851915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.795 qpair failed and we were unable to recover it. 
00:26:45.795 [2024-05-12 07:06:52.852116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.852315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.852348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.795 qpair failed and we were unable to recover it. 00:26:45.795 [2024-05-12 07:06:52.852567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.852722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.852748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.795 qpair failed and we were unable to recover it. 00:26:45.795 [2024-05-12 07:06:52.852944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.853169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.853198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.795 qpair failed and we were unable to recover it. 00:26:45.795 [2024-05-12 07:06:52.853420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.853617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.853646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.795 qpair failed and we were unable to recover it. 
00:26:45.795 [2024-05-12 07:06:52.853850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.854053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.795 [2024-05-12 07:06:52.854081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.795 qpair failed and we were unable to recover it. 00:26:45.795 [2024-05-12 07:06:52.854306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.854505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.854533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.854734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.854935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.854961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.855164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.855397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.855424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 
00:26:45.796 [2024-05-12 07:06:52.855611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.855797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.855827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.856075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.856271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.856300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.856504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.856654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.856680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.856862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.857087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.857116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 
00:26:45.796 [2024-05-12 07:06:52.857322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.857501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.857526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.857679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.857892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.857917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.858080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.858252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.858281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.858487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.858656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.858683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 
00:26:45.796 [2024-05-12 07:06:52.858872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.859047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.859075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.859308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.859479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.859506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.859710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.859897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.859922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.860114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.860292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.860333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 
00:26:45.796 [2024-05-12 07:06:52.860533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.860690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.860740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.860920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.861130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.861158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.861330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.861503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.861530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.861774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.861926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.861950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 
00:26:45.796 [2024-05-12 07:06:52.862171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.862359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.862386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.862614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.862788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.862815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.862966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.863157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.863184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.863381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.863554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.863582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 
00:26:45.796 [2024-05-12 07:06:52.863787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.863989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.864014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.864179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.864349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.864377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.864632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.864805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.864830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.865009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.865220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.865264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 
00:26:45.796 [2024-05-12 07:06:52.865428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.865592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.865621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.865830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.866001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.866027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.796 [2024-05-12 07:06:52.866201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.866410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.796 [2024-05-12 07:06:52.866437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.796 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.866630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.866823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.866849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 
00:26:45.797 [2024-05-12 07:06:52.867018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.867239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.867267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.867438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.867658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.867682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.867846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.867994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.868017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.868197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.868362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.868389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 
00:26:45.797 [2024-05-12 07:06:52.868559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.868777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.868804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.868952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.869186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.869214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.869423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.869643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.869670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.869880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.870036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.870068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 
00:26:45.797 [2024-05-12 07:06:52.870220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.870427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.870459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.870671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.870865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.870889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.871093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.871249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.871278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.871471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.871646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.871673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 
00:26:45.797 [2024-05-12 07:06:52.871869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.872015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.872039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.872231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.872402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.872429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.872646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.872826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.872851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.873013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.873244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.873272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 
00:26:45.797 [2024-05-12 07:06:52.873463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.873639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.873664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.873825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.874022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.874049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.874239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.874394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.874422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.874630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.874828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.874853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 
00:26:45.797 [2024-05-12 07:06:52.875051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.875249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.875276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.875487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.875712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.875755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.875911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.876108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.876134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-05-12 07:06:52.876378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.876581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.876610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 
00:26:45.797 [2024-05-12 07:06:52.876819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.877017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-05-12 07:06:52.877044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.877214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.877380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.877408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.877629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.877830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.877855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.878057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.878223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.878251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-05-12 07:06:52.878445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.878617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.878645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.878832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.879005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.879049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.879230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.879385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.879424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.879625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.879831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.879857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-05-12 07:06:52.880026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.880226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.880249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.880401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.880581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.880606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.880800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.880946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.880969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.881176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.881388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.881415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-05-12 07:06:52.881648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.881834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.881859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.882028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.882258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.882285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.882553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.882727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.882768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.882953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.883128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.883156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-05-12 07:06:52.883325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.883528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.883555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.883770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.883947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.883989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.884201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.884348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.884391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.884568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.884800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.884826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-05-12 07:06:52.885016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.885185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.885214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.885383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.885609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.885637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.885814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.885989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.886013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.886189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.886424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.886464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-05-12 07:06:52.886688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.886856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.886880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.887024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.887183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.887208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.887430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.887644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.887669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.887836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.888037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.888065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-05-12 07:06:52.888241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.888431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.888459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.888675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.888837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.888861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-05-12 07:06:52.889051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-05-12 07:06:52.889262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-05-12 07:06:52.889290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-05-12 07:06:52.889514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-05-12 07:06:52.889708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-05-12 07:06:52.889751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 
00:26:45.799 [2024-05-12 07:06:52.889925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-05-12 07:06:52.890093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-05-12 07:06:52.890122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-05-12 07:06:52.890293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-05-12 07:06:52.890498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-05-12 07:06:52.890527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-05-12 07:06:52.890718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-05-12 07:06:52.890865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-05-12 07:06:52.890889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-05-12 07:06:52.891065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.891263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.891303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 
00:26:46.063 [2024-05-12 07:06:52.891499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.891648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.891671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.891894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.892042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.892067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.892263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.892415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.892440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.892591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.892788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.892813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 
00:26:46.063 [2024-05-12 07:06:52.892984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.893179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.893207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.893382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.893574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.893601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.893803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.893975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.894018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.894215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.894452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.894477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 
00:26:46.063 [2024-05-12 07:06:52.894677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.894884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.894909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.895117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.895321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.895352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.895543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.895777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.895805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.895996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.896204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.896227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 
00:26:46.063 [2024-05-12 07:06:52.896418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.896616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.896643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.896836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.896992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.897017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.897198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.897392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.897421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.897608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.897809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.897837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 
00:26:46.063 [2024-05-12 07:06:52.898012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.898203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.898230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.898429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.898609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.898634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.898794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.898975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.899024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.899222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.899392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.899420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 
00:26:46.063 [2024-05-12 07:06:52.899584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.899811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.899837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.899991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.900169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.900195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.900370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.900536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.900564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 00:26:46.063 [2024-05-12 07:06:52.900757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.900960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.900985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.063 qpair failed and we were unable to recover it. 
00:26:46.063 [2024-05-12 07:06:52.901191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.063 [2024-05-12 07:06:52.901416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.901443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.901672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.901826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.901851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.902027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.902179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.902203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.902400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.902567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.902594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.902770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.902936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.902962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.903174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.903347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.903373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.903581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.903794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.903819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.904018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.904181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.904209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.904403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.904604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.904631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.904820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.904971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.905012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.905194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.905380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.905404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.905589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.905762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.905790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.905984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.906176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.906201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.906378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.906567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.906594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.906767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.906946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.906988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.907197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.907346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.907371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.907554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.907769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.907795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.907987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.908172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.908199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.908444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.908622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.908647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.908838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.908990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.909043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.909212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.909381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.909407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.909577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.909791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.909817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.909998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.910231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.910258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.910451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.910629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.910656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.910876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.911049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.911076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.911312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.911505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.911533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.911776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.911955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.911978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.912190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.912388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.912417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.912615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.912821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.912846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.913053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.913264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.913290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.913490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.913639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.913664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.913852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.914000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.914024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.914180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.914338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.914362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.914523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.914725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.914751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.914910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.915073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.915101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.915271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.915466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.915494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.915717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.915911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.915942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.916149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.916370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.916404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.916570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.916801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.916830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.917005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.917199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.917227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.917451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.917637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.917667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.917841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.918015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.918043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.918263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.918439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.918467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.918640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.918836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.918860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.919010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.919218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.919262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.919485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.919656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.919682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.919857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.920007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.920048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.920266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.920424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.920448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.920625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.920834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.920860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.921032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.921233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.921257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.921408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.921601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.921628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.921803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.921950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.921975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.922188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.922388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.922416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.922594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.922787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.922814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.923014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.923173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.923201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.923436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.923632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.923660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.923873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.924070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.924097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.924288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.924458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.924486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.924659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.924819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.924845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.925002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.925180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.925206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 
00:26:46.064 [2024-05-12 07:06:52.925384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.925572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.925599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.925817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.925986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.926013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.926181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.926327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.926366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.064 qpair failed and we were unable to recover it. 00:26:46.064 [2024-05-12 07:06:52.926588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.064 [2024-05-12 07:06:52.926788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.926812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 
00:26:46.065 [2024-05-12 07:06:52.926990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.927197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.927225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.927432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.927584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.927609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.927815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.927973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.927997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.928193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.928363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.928391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 
00:26:46.065 [2024-05-12 07:06:52.928552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.928748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.928775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.928980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.929143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.929184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.929367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.929550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.929575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.929795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.929947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.929972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 
00:26:46.065 [2024-05-12 07:06:52.930155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.930343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.930369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.930543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.930693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.930754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.930928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.931127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.931151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.931328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.931492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.931520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 
00:26:46.065 [2024-05-12 07:06:52.931733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.931893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.931919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.932138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.932345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.932368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.932533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.932726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.932755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.932954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.933140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.933166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 
00:26:46.065 [2024-05-12 07:06:52.933364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.933529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.933557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.933745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.933939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.933967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.934138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.934312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.934336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 00:26:46.065 [2024-05-12 07:06:52.934488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.934659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.065 [2024-05-12 07:06:52.934682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.065 qpair failed and we were unable to recover it. 
00:26:46.065 [2024-05-12 07:06:52.934835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-05-12 07:06:52.935033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-05-12 07:06:52.935064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-05-12 07:06:52.935281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-05-12 07:06:52.935448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-05-12 07:06:52.935475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-05-12 07:06:52.935673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-05-12 07:06:52.935869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-05-12 07:06:52.935895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
[... the same four-record failure cycle (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x16599f0 addr=10.0.0.2 port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 2024-05-12 07:06:52.936069 through 07:06:52.969805, build clock 00:26:46.065 to 00:26:46.067 ...]
00:26:46.067 [2024-05-12 07:06:52.969954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-05-12 07:06:52.970163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-05-12 07:06:52.970191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-05-12 07:06:52.970372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.970549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.970592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.970778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.970924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.970948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.971106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.971257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.971280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.971482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.971674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.971710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 
00:26:46.067 [2024-05-12 07:06:52.971891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.972077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.972102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.972341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.972483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.972507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.972716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.972910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.972935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.973119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.973339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.973365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 
00:26:46.067 [2024-05-12 07:06:52.973557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.973733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.973758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.973960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.974118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.974144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.974313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.974532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.974560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.974730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.974914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.974938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 
00:26:46.067 [2024-05-12 07:06:52.975096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.975310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.975337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.975523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.975669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.975692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.975881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.976024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.976065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.976236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.976456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.976484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 
00:26:46.067 [2024-05-12 07:06:52.976676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.976847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.976875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.977046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.977246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.977274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.067 [2024-05-12 07:06:52.977453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.977627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.067 [2024-05-12 07:06:52.977652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.067 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.977839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.978060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.978087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.978290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.978485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.978511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.978689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.978851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.978877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.979060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.979278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.979303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.979511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.979741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.979768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.979944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.980126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.980154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.980343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.980548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.980588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.980766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.980938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.980981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.981182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.981395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.981427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.981621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.981806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.981831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.982009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.982173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.982197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.982433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.982628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.982655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.982855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.983024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.983050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.983258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.983424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.983452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.983630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.983810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.983837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.984015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.984194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.984222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.984419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.984588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.984615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.984824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.984981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.985006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.985196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.985361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.985388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.985562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.985729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.985770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.985927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.986150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.986177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.986402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.986613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.986655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.986826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.986995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.987020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.987173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.987371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.987398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.987568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.987746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.987772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.987948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.988140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.988166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.988363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.988569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.988596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.988801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.988942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.988966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.989162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.989357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.989387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.989666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.989848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.989873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.990067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.990261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.990289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.990481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.990670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.990704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.990881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.991026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.991051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.991286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.991570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.991597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.991769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.991951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.991991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.992188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.992410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.992437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.992635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.992851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.992876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.993058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.993327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.993354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.993622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.993799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.993824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.993972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.994115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.994139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.994290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.994463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.994490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.994659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.994847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.994872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.995069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.995269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.995296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.995516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.995722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.995766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 00:26:46.068 [2024-05-12 07:06:52.995927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.996105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.068 [2024-05-12 07:06:52.996129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.068 qpair failed and we were unable to recover it. 
00:26:46.068 [2024-05-12 07:06:52.996326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.996483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.996510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:52.996746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.996900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.996925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:52.997082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.997259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.997286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:52.997486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.997680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.997715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:52.997886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.998080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.998108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:52.998377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.998570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.998598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:52.998815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.998973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.998999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:52.999209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.999478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.999506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:52.999790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.999945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:52.999970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:53.000181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.000376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.000404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:53.000669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.000851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.000877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:53.001053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.001275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.001303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:53.001481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.001670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.001703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:53.001902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.002093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.002120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:53.002312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.002510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.002541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-05-12 07:06:53.002767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-05-12 07:06:53.002912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.002937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.003134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.003306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.003333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.003524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.003691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.003744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.003929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.004090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.004114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.004319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.004514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.004541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.004774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.004995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.005023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.005193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.005389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.005417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.005608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.005812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.005837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.005993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.006180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.006222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.006379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.006542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.006569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.006747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.006898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.006923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.007097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.007289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.007317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.007513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.007712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.007755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.007940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.008140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.008167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.008341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.008532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.008559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.008769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.008971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.009012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.009239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.009472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.009512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.009718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.009920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.009945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.010153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.010344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.010371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.010566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.010747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.010771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.010973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.011178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.011206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.011378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.011555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.011582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.011771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.011948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.011989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.012164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.012343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.012385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.012582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.012789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.012815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.013011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.013185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.013213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.013412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.013579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.013606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.013814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.013963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.014003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.014240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.014407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.014447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.014609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.014818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.014843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.015045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.015244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.015268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.015453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.015641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.015668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.015872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.016067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.016095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.016257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.016424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.016451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.016625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.016817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.016843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.017047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.017276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.017304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.017529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.017719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.017762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.017917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.018152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.018180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.018377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.018540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.018566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.018762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.018943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.018967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.019177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.019394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.019419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.019623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.019847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.019874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.020046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.020237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.020265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.020432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.020627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.020653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.020867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.021059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.021087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.021294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.021470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.021511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.021685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.021848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.021874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.022118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.022318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.022347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.022543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.022718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.022759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.022918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.023096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.023121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.023336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.023531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.023562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.023743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.023899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.023924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.024088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.024290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.024314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.024492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.024667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.024694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.024870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.025038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.025064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.025266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.025442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.025466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.025660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.025867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-05-12 07:06:53.025893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-05-12 07:06:53.026121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.026285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.026312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.069 [2024-05-12 07:06:53.026510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.026728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.026754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.069 [2024-05-12 07:06:53.026924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.027128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.027156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.069 [2024-05-12 07:06:53.027348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.027563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.027590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 
00:26:46.069 [2024-05-12 07:06:53.027760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.028010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.028038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.069 [2024-05-12 07:06:53.028241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.028393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.028434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.069 [2024-05-12 07:06:53.028634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.028814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.028840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.069 [2024-05-12 07:06:53.028994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.029166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.029206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 
00:26:46.069 [2024-05-12 07:06:53.029377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.029566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.029593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.069 [2024-05-12 07:06:53.029775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.029958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.029985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.069 [2024-05-12 07:06:53.030146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.030298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.030339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.069 [2024-05-12 07:06:53.030544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.030722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.030750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 
00:26:46.069 [2024-05-12 07:06:53.030919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.031083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.069 [2024-05-12 07:06:53.031110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.069 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.031308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.031450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.031490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.031688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.031886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.031909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.032112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.032312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.032340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.032531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.032690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.032724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.032917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.033118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.033143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.033322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.033498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.033537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.033755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.033927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.033956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.034131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.034326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.034353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.034535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.034711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.034754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.034930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.035138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.035163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.035345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.035575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.035600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.035809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.036028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.036052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.036226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.036416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.036444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.036612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.036798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.036826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.037104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.037295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.037322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.037493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.037688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.037726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.037895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.038094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.038120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.038299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.038447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.038489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.038661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.038867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.038892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.039157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.039331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.039360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.039541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.039694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.039736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.039891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.040061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.040088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.040259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.040475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.040502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.040706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.040905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.040933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.041128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.041298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.041327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.041527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.041753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.041782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.042010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.042196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.042223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.042388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.042568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.042596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.042784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.042953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.042978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.043152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.043304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.043328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.043501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.043654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.043679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.043861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.044010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.044056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.044249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.044421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.044446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.044596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.044818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.044846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.045126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.045394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.045421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.045616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.045799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.045826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.046021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.046256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.046281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.046487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.046709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.046737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.046936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.047120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.047148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.047349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.047616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.047663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.047849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.048017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.048044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.048213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.048365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.048410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.048586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.048783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.048807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.049008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.049178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.049206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.049407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.049600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.049627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.049810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.049956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.049981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.050162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.050345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.050371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.050536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.050704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.050734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.050929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.051151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.051178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.051375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.051642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.051669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-05-12 07:06:53.051858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.052007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-05-12 07:06:53.052049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-05-12 07:06:53.052215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.052402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.052429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-05-12 07:06:53.052634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.052816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.052841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-05-12 07:06:53.053053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.053252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.053280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-05-12 07:06:53.053484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.053651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.053678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-05-12 07:06:53.053899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.054044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.054085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-05-12 07:06:53.054288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.054451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.054478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-05-12 07:06:53.054721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-05-12 07:06:53.054892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.054917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.055119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.055338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.055366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.055520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.055720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.055746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.055919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.056112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.056139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.056308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.056486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.056528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.056709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.056878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.056907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.057078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.057269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.057296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.057473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.057625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.057650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.057817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.058041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.058068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.058228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.058396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.058425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.058660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.058851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.058876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.059060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.059212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.059237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.059443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.059644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.059671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.059912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.060135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.060185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.060379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.060587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.060612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.060767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.060929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.060954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.061168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.061366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.061393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.061586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.061776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.061804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.062024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.062191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.062218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.062390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.062563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.062591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.062771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.062934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.062961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.063153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.063316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.063344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.063532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.063703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.063730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.063928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.064126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.064151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.064325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.064528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.064554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.064788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.064950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.064990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.065183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.065378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.065405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.065596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.065786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.065814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.065992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.066168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.066193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.066393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.066613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.066640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.066838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.067019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.067044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.067233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.067413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.067437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.067583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.067764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.067789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.067953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.068166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.068194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.068384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.068568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.068595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.068796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.068993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.069024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.069254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.069429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.069457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.069651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.069853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.069879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.070036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.070203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.070231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.070450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.070642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.070669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.070856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.071003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.071027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.071173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.071316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.071340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.071531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.071707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.071736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.071943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.072142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.072166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.072319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.072547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.072575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.072782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.072934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.072958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.073173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.073389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.073417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.073586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.073804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.073831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.074000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.074181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.074206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.074374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.074572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.074596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.074771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.075000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.075024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.075196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.075409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.075434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.075581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.075724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.075750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.075901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.076080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.076121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.076328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.076496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.076523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.076728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.076948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.076976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.077194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.077346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.077372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.077521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.077737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.077765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.077934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.078158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.078185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.078354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.078546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.078573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.078767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.078917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.078943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.079115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.079334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.079359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.079507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.079683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.079736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.079923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.080096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.071 [2024-05-12 07:06:53.080123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.071 qpair failed and we were unable to recover it.
00:26:46.071 [2024-05-12 07:06:53.080323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.080486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.080513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.080692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.080922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.080947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.081076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16674b0 is same with the state(5) to be set
00:26:46.072 [2024-05-12 07:06:53.081358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.081573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.081604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.081782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.081944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.081971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.082181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.082362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.082387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.082565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.082748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.082774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.082951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.083121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.083146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.083352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.083493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.083518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.083669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.083874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.083899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.084060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.084240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.084265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.084446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.084597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.084624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.084781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.084933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.084958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.085168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.085343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.085368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.085578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.085724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.085750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.085932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.086076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.086101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.086251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.086446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.086471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.086624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.086816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.072 [2024-05-12 07:06:53.086842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.072 qpair failed and we were unable to recover it.
00:26:46.072 [2024-05-12 07:06:53.086987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.087138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.087163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.087342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.087517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.087542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.087727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.087912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.087939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.088115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.088321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.088346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.088575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.088779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.088805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.088993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.089179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.089206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.089359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.089538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.089563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.089740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.089891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.089916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.090071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.090251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.090277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.090454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.090606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.090631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.090792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.090941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.090966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.091143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.091326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.091351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.091504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.091683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.091714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.091898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.092075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.092100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.092283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.092453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.092478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.092671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.092854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.092880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.093032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.093213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.093239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.093421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.093589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.093615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.093789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.093970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.093995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.094170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.094315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.094340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.094493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.094671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.094703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.094856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.095027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.095052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.095228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.095402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.095427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.095612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.095775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.095801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.095953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.096100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.096125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.096304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.096502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.096527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.096711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.096863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.096890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.097066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.097211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.097236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.097388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.097536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.097562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.097712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.097866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.097892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.098050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.098201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.098243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.098428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.098608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.098635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.098818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.098970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.098996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.099170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.099318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.099344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.099520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.099703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.099729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.099940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.100120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.100146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.100320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.100472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.100497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.100672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.100821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.100846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.101032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.101186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.101211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.101360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.101507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.101532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-05-12 07:06:53.101744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.101927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.101953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.102136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.102335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.102360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.102520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.102707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.102733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-05-12 07:06:53.102886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-05-12 07:06:53.103065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.103090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.103275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.103422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.103447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.103628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.103807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.103832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.103989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.104168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.104193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.104341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.104496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.104522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.104720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.104875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.104901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.105075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.105250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.105275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.105452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.105631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.105656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.105845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.106023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.106048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.106250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.106430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.106455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.106633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.106787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.106812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.106988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.107170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.107195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.107398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.107552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.107578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.107769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.107919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.107946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.108103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.108254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.108281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.108461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.108610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.108635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.108819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.108993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.109018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.109196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.109372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.109397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.109550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.109703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.109728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.109911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.110065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.110090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.110267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.110448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.110475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.110630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.110805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.110832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.111009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.111191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.111216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.111407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.111554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.111578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.111729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.111908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.111933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.112088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.112260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.112285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.112428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.112574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.112600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.112795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.112973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.112998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.113151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.113327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.113352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.113503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.113679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.113711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.113890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.114065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.114091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.114253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.114431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.114456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.114605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.114764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.114790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.114968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.115147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.115172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.115322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.115469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.115493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.115647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.115826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.115852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.116034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.116182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.116208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.116359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.116568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.116593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.116767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.116944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.116969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.117126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.117278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.117303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.117486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.117661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.117686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.117842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.117998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.118023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.118203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.118380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.118405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.118560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.118741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.118767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.118953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.119093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.119119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.119268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.119422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.119447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.119651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.119802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.119828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.120011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.120158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.120183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.120355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.120558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.120583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.120760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.120935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.120960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.121166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.121313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.121353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.121536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.121804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.121830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.121982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.122148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.122177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.122376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.122585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.122610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.122762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.122938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.122964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.123149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.123311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.123350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.123585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.123733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.123759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.123907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.124191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.124215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.124525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.124730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.124773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.124924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.125083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.125123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.125302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.125498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.125524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.125812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.125967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.125994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.126166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.126314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.126357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.126542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.126778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.126804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.073 [2024-05-12 07:06:53.126960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.127250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.127274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 
00:26:46.073 [2024-05-12 07:06:53.127493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.127655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.073 [2024-05-12 07:06:53.127683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.073 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.127894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.128100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.128126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.128278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.128433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.128460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.128635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.128820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.128846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 
00:26:46.074 [2024-05-12 07:06:53.129010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.129312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.129352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.129585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.129781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.129810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.130038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.130241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.130269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.130444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.130619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.130648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 
00:26:46.074 [2024-05-12 07:06:53.130844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.131024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.131049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.131228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.131483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.131523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.131759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.131913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.131939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.132193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.132351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.132376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 
00:26:46.074 [2024-05-12 07:06:53.132521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.132705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.132730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.132908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.133059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.133084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.133283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.133491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.133517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.133694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.133861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.133887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 
00:26:46.074 [2024-05-12 07:06:53.134068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.134224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.134248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.134433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.134638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.134667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.134850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.135027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.135053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.135258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.135483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.135507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 
00:26:46.074 [2024-05-12 07:06:53.135715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.135889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.135915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.136124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.136296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.136322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.136462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.136643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.136668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.136871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.137050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.137075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 
00:26:46.074 [2024-05-12 07:06:53.137245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.137387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.137412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.137567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.137743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.137769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.137959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.138177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.138202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 00:26:46.074 [2024-05-12 07:06:53.138406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.138547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.074 [2024-05-12 07:06:53.138572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420 00:26:46.074 qpair failed and we were unable to recover it. 
00:26:46.074 [2024-05-12 07:06:53.138816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.138961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.138988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.139249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.139457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.139486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.139679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.139883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.139913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.140200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.140433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.140461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.140657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.140894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.140920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.141122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.141296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.141322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.141541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.141725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.141751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.141902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.142136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.142161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.142352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.142545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.142570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.142822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.142989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.143013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.143222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.143405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.143445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.143657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.143848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.143874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.144025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.144174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.144199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.144348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.144613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.144652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.144893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.145072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.145098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.145340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.145503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.145526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.145665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.145854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.145879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.146063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.146205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.146231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.146414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.146614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.146638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.146819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.146981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.147006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.147209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.147405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.147431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.147656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.147882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.147911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.148081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.148266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.148305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.148524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.148675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.148717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.148900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.149087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.149111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.149320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.149492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.149517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.149686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.149870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.149895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.150082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.150226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.150251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.150419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.150639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.150664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.150853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.151060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.151085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.151242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.151423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.074 [2024-05-12 07:06:53.151449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.074 qpair failed and we were unable to recover it.
00:26:46.074 [2024-05-12 07:06:53.151705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.151883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.151909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.152115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.152278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.152303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.152498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.152688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.152727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.152909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.153101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.153126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.153327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.153506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.153532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.153724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.153936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.153962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.154144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.154397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.154422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.154624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.154801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.154830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.155001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.155161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.155186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.155376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.155618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.155642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.155868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.156074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.156099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.156328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.156575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.156599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.156787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.156940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.156965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.157111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.157347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.157372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.157535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.157753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.157779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.158023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.158215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.158243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.158460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.158670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.158700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.158881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.159051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.159075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.159230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.159428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.159453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.159647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.159835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.159861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.160028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.160203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.160228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.160486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.160704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.160747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.161010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.161263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.161290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.161493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.161663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.161711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.161926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.162084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.162110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.162293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.162500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.162526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.162715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.162922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.162946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.163194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.163359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.163384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.163597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.163773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.163798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac9c000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.163975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.164254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.164286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.164502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.164711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.164747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.164934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.165093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.165117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.165360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.165551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.165592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.165813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.165964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.166006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.166243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.166490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.166532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.166718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.166920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.166967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.167204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.167446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.167472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.167707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.167914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.167956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.168164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.168390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.168433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.168641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.168873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.168916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.169160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.169369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.169411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.169621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.169842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.169888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.170095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.170256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.170281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.170494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.170716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.170756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.171024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.171271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.171312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.171521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.171708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.171734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.171934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.172130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.172171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.172409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.172616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.172639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.172819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.173027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.173070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.173283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.173483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.173525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.173728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.173931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.173975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.174211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.174419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.174460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.174651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.174838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.174864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.175133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.175351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.175392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.175569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.175784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.175808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.176010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.176314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.176356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.176566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.176807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.176832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.177012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.177190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.177215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.177431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.177682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.177730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.177953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.178158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.178187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.178431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.178594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.178619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.178854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.179039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.179082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.075 qpair failed and we were unable to recover it.
00:26:46.075 [2024-05-12 07:06:53.179287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.179498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.075 [2024-05-12 07:06:53.179522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.179813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.180024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.180064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.180312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.180541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.180566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.180795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.180991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.181031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.181210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.181433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.181475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.181677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.181893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.181919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.182134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.182391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.182419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.182636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.182886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.182929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.183128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.183323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.183366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.183585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.183784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.183828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.076 [2024-05-12 07:06:53.184064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.184279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.076 [2024-05-12 07:06:53.184321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.076 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.184530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.184810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.184852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.185065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.185341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.185384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.185550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.185808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.185851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.186026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.186276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.186318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.186523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.186731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.186756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.187040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.187285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.187327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.187527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.187741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.187768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.187987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.188232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.188260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.188482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.188734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.188762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.188926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.189121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.189163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.189372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.189611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.189635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.189851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.190063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.190106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.190471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.190704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.190730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.190908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.191111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.191154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.191388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.191634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.191661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.191867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.192076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.192118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.192322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.192538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.192585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.192780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.193057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.193100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.193337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.193524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.193550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.193756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.193981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.194026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.194232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.194486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.194511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.194701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.194866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.194890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.195033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.195235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.195278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.195451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.195622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.195646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.195850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.196034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.196076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.196323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.196542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.196567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.196833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.197089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.197137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.197340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.197562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.197586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.197791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.198022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.198064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.198287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.198512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.198558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.198757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.199036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.199078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.199319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.199545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.199569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.199782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.200007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.200051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.200263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.200542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.200585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.200826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.201019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.201061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.201243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.201413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.201438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.201591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.201813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.201844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.202047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.202266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.202307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.202510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.202739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.202779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.203038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.203218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-05-12 07:06:53.203260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-05-12 07:06:53.203398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.203604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.203629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 00:26:46.350 [2024-05-12 07:06:53.203917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.204140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.204183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 00:26:46.350 [2024-05-12 07:06:53.204396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.204553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.204578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 00:26:46.350 [2024-05-12 07:06:53.204788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.205004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.205046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 
00:26:46.350 [2024-05-12 07:06:53.205209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.205422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.205448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 00:26:46.350 [2024-05-12 07:06:53.205658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.205864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.205908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 00:26:46.350 [2024-05-12 07:06:53.206106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.206396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.206437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 00:26:46.350 [2024-05-12 07:06:53.206647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.206867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.206910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 
00:26:46.350 [2024-05-12 07:06:53.207101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.207344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.207388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 00:26:46.350 [2024-05-12 07:06:53.207575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.207809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.350 [2024-05-12 07:06:53.207858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.350 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.208042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.208287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.208314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.208515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.208686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.208720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.208922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.209145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.209187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.209455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.209656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.209702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.209881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.210158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.210200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.210400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.210685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.210717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.210948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.211254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.211296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.211533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.211706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.211732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.211914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.212120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.212162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.212393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.212560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.212586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.212795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.213025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.213066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.213233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.213452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.213494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.213710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.213958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.214001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.214186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.214393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.214436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.214646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.214849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.214893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.215144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.215364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.215410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.215597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.215886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.215929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.216159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.216357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.216398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.216578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.216792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.216835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.217107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.217290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.217332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.217483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.217639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.217679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.217908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.218154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.218197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.218436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.218609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.218634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.218841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.219036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.219078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.219227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.219427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.219455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.219615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.219829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.219872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.220114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.220299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.220327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.220537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.220786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.220813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.221025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.221229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.221272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.221468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.221673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.221701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.222000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.222243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.222271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.222530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.222703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.222730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.222965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.223189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.223233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.223473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.223673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.223713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.223895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.224104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.224145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.224355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.224599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.224638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.224865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.225220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.225281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.225497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.225739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.225780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.225989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.226407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.226457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.226631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.226830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.226857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.227152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.227407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.227449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.227638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.227840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.227866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.228068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.228281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.228323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.228561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.228740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.228765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.229011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.229272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.229313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.229517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.229724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.229750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.229987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.230228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.230257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.230493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.230717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.230742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.230975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.231201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.231244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.231443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.231684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.231715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.231897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.232130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.232158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.232349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.232597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.232639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.232853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.233050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.233093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.233260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.233450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.233475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 
00:26:46.351 [2024-05-12 07:06:53.233626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.233868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.233911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.234143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.234350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.234392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.351 qpair failed and we were unable to recover it. 00:26:46.351 [2024-05-12 07:06:53.234593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.351 [2024-05-12 07:06:53.234777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.234820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.352 qpair failed and we were unable to recover it. 00:26:46.352 [2024-05-12 07:06:53.235060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.235346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.235389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.352 qpair failed and we were unable to recover it. 
00:26:46.352 [2024-05-12 07:06:53.235604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.235834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.235880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.352 qpair failed and we were unable to recover it. 00:26:46.352 [2024-05-12 07:06:53.236124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.236367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.236394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.352 qpair failed and we were unable to recover it. 00:26:46.352 [2024-05-12 07:06:53.236620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.236832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.236858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.352 qpair failed and we were unable to recover it. 00:26:46.352 [2024-05-12 07:06:53.237032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.237262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.352 [2024-05-12 07:06:53.237287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.352 qpair failed and we were unable to recover it. 
00:26:46.352 [2024-05-12 07:06:53.237428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.237625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.237650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.237867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.238149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.238192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.238429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.238630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.238654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.238856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.239105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.239149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.239357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.239588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.239612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.239846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.240127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.240169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.240514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.240761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.240786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.240988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.241329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.241370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.241618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.241818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.241844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.242081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.242298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.242341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.242517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.242750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.242776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.242984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.243214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.243258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.243437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.243719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.243746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.243946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.244172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.244215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.244424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.244691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.244738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.244895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.245104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.245147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.245342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.245583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.245624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.245845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.246080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.246109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.246325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.246569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.246594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.246816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.247063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.247106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.247337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.247569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.247594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.247773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.247983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.248024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.248193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.248437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.248479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.248665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.248924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.248965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.249170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.249390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.249436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.249683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.249883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.249909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.250076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.250313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.250355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.250599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.250851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.250877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.251108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.251308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.251351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.251560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.251791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.251835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.252047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.252311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.252353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.252543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.252717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.252743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.253046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.253230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.253273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.253480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.253666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.253690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.253913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.254224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.254268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.254479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.254720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.254745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.254996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.255185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.255229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.255435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.255636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.255662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.255890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.256110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.256153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.256368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.256569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.256594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.256916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.257128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.257170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.257374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.257611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.257635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.257812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.258016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.258057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.258289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.258506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.258550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-05-12 07:06:53.258759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.258959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-05-12 07:06:53.259001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.259215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.259409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.259450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.259630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.259866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.259907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.260114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.260357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.260383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.260583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.260754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.260797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.261034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.261250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.261293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.261483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.261736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.261761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.261933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.262125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.262167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.262402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.262651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.262690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.262887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.263082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.263124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.263333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.263571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.263613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.263800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.263992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.264025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.264222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.264509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.264535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.264755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.264994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.265036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.265283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.265559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.265602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.265803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.266004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.266046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.266261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.266503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.266531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.266749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.266940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.266982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.267223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.267433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.267474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.267691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.267932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.267976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.268171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.268388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.268430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.268624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.268796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.268828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.268996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.269236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.269265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.269512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.269736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.269761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.269939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.270187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.270229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.270423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.270605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.270630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.270831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.271033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.271074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.271318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.271527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.271569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.271756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.271996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.272037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.272238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.272430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.272473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.272659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.272875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.272901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.273113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.273410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.273457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.273688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.273900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.273926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.274103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.274326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.274369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.274551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.274789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.274832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.275062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.275267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.275308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.275518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.275704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.275730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.275940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.276162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.276205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.276417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.276655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.276679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.276905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.277115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.277157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.277351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.277669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.277693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.277920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.278101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.278148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.278355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.278560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.278585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.278842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.279065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.279108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.279311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.279500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.279524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.279711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.279969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.280011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.280214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.280496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.280538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.280753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.280949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.280997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.281201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.281545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.281587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.281776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.281958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.281999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.282200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.282440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.282468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.282673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.282885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.282910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.283128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.283377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.283419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.283615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.283786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.283812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.284078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.284295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.284338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.284527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.284689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.284719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.285083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.285352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.285397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-05-12 07:06:53.285615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.285843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-05-12 07:06:53.285869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.286053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.286274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.286317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.286524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.286712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.286739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.287046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.287261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.287305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.287511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.287708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.287735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.287956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.288252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.288294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.288541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.288722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.288763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.288938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.289131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.289173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.289405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.289595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.289619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.289813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.290123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.290166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.290392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.290604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.290629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.290831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.291037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.291080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.291310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.291512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.291537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.291743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.291967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.291992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.292218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.292499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.292542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.292769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.292997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.293040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.293271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.293464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.293507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.293747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.293954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.293996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.294232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.294439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.294481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.294661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.294843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.294868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.295088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.295311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.295353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.295561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.295760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.295791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.295979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.296195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.296238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.296474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.296680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.296725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.297000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.297198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.297241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.297414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.297645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.297669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.297926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.298138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.298180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.298379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.298573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.298597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.298833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.299020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.299047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.299310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.299507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.299532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.299707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.299893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.299919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.300100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.300315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.300358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.300538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.300713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.300740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.300975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.301188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.301241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.301418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.301611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.301636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.301782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.301981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.302023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.302286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.302513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.302539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.302701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.302881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.302906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.303115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.303282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.303309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.303514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.303717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.303744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.303978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.304206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.304255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.304438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.304615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.304640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.304925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.305113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-05-12 07:06:53.305155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-05-12 07:06:53.305357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.305526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.305551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 00:26:46.354 [2024-05-12 07:06:53.305810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.305974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.306000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 00:26:46.354 [2024-05-12 07:06:53.306224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.306379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.306405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 00:26:46.354 [2024-05-12 07:06:53.306583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.306755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.306791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 
00:26:46.354 [2024-05-12 07:06:53.307011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.307289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.307335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 00:26:46.354 [2024-05-12 07:06:53.307511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.307701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.307727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 00:26:46.354 [2024-05-12 07:06:53.307907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.308164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.308210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 00:26:46.354 [2024-05-12 07:06:53.308406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.308630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.308654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 
00:26:46.354 [2024-05-12 07:06:53.308852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.309130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.309177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 00:26:46.354 [2024-05-12 07:06:53.309380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.309572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.354 [2024-05-12 07:06:53.309596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.354 qpair failed and we were unable to recover it. 00:26:46.354 [2024-05-12 07:06:53.309781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.309973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.310013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.310256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.310461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.310497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.310684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.310912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.310956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.311191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.311454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.311501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.311681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.311903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.311946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.312151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.312381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.312407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.312579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.312810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.312854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.313047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.313263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.313306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.313483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.313655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.313681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.313896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.314140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.314183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.314384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.314576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.314601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.314767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.314957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.315000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.315173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.315394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.315436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.315614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.315811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.315854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.316037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.316258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.316300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.316494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.316660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.316684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.316934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.317179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.317213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.317424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.317627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.317652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.317862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.318049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.318091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.318302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.318489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.318532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.318717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.318865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.318892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.319109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.319341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.319370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.319565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.319777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.319821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.320026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.320242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.320285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.320442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.320591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.320616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.320822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.321054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.321117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.321344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.321511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.321536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.321718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.321901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.321943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.322118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.322309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.322350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.322524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.322677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.322710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.322890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.323163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.323205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.323448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.323611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.323637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.323789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.323963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.324006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.324178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.324364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.324408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.324591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.324801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.324847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.325032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.325252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.325294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.325440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.325648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.325673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.325868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.326055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.326098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.326332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.326526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.326552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.326738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.326942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.326970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.327172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.327376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.327404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.327570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.327740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.327768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.327963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.328167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.328210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.328415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.328594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.328620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.328826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.329016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.329059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.329260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.329451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.329476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.329627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.329810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.329854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.330034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.330246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.330289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.330469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.330627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.330654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.330872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.331090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.331133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.331372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.331569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.331594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.331770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.331964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.332007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.332188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.332396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.332422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.332604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.332817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.332861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 
00:26:46.355 [2024-05-12 07:06:53.333044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.333297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.333340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.333492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.333673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.333703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.355 [2024-05-12 07:06:53.333884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.334077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.355 [2024-05-12 07:06:53.334120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.355 qpair failed and we were unable to recover it. 00:26:46.356 [2024-05-12 07:06:53.334294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-05-12 07:06:53.334504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-05-12 07:06:53.334529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 
00:26:46.357 [2024-05-12 07:06:53.371017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.371206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.371249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.371500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.371705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.371731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.371877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.372055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.372084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.372273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.372499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.372525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 
00:26:46.357 [2024-05-12 07:06:53.372704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.372947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.372975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.373188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.373398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.373449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.373624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.373787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.373832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.374037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.374230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.374272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 
00:26:46.357 [2024-05-12 07:06:53.374504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.374749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.374775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.374951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.375140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.375184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.375387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.375579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.375604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.375785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.375980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.376023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 
00:26:46.357 [2024-05-12 07:06:53.376265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.376592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.376642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.376822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.377021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.377063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.377242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.377502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.377550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.377746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.377983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.378011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 
00:26:46.357 [2024-05-12 07:06:53.378197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.378421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.378470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.378623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.378848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.378892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.379109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.379480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.379531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.379718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.379925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.379973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 
00:26:46.357 [2024-05-12 07:06:53.380149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.380369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.380411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.380568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.380784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.380832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.381020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.381221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.381263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 00:26:46.357 [2024-05-12 07:06:53.381417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.381620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.357 [2024-05-12 07:06:53.381645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.357 qpair failed and we were unable to recover it. 
00:26:46.358 [2024-05-12 07:06:53.381834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.382018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.382061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.382238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.382401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.382427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.382611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.382783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.382827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.383061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.383302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.383357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 
00:26:46.358 [2024-05-12 07:06:53.383537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.383720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.383746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.383922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.384131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.384157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.384410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.384612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.384639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.384847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.385126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.385169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 
00:26:46.358 [2024-05-12 07:06:53.385349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.385515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.385540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.385707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.385905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.385948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.386119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.386342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.386385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.386531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.386678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.386711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 
00:26:46.358 [2024-05-12 07:06:53.386916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.387192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.387245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.387445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.387631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.387657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.387846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.388072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.388114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.388334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.388562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.388587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 
00:26:46.358 [2024-05-12 07:06:53.388785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.389004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.389051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.389281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.389479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.389521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.389703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.389880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.389923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.390154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.390368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.390417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 
00:26:46.358 [2024-05-12 07:06:53.390567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.390742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.390771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.391001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.391225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.391268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.391439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.391627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.391652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.391828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.392053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.392097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 
00:26:46.358 [2024-05-12 07:06:53.392353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.392554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.392578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.392780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.393027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.393070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.393272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.393489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.393531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.393715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.393959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.394000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 
00:26:46.358 [2024-05-12 07:06:53.394197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.394412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.394454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.394617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.394826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.394869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.395103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.395296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.395324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 00:26:46.358 [2024-05-12 07:06:53.395516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.395700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.358 [2024-05-12 07:06:53.395726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.358 qpair failed and we were unable to recover it. 
00:26:46.358 [2024-05-12 07:06:53.395943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:46.358 [2024-05-12 07:06:53.396135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:46.358 [2024-05-12 07:06:53.396177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 
00:26:46.358 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." cycle for tqpair=0x7faca4000b90 (addr=10.0.0.2, port=4420) repeats without variation through 2024-05-12 07:06:53.433971 ...]
00:26:46.360 [2024-05-12 07:06:53.434197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.434391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.434434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.434593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.434789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.434834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.435008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.435223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.435251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.435420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.435573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.435600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.435775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.435962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.435991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.436157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.436339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.436364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.436544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.436686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.436716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.436928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.437175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.437217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.437421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.437620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.437645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.437829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.438033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.438075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.438246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.438468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.438496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.438681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.438839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.438866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.439100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.439320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.439365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.439574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.439752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.439778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.439949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.440175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.440204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.440397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.440568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.440594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.440761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.441015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.441057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.441257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.441451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.441476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.441636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.441808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.441850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.442056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.442312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.442355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.442547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.442746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.442788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.442992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.443211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.443254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.443435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.443611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.443637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.443794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.444008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.444034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.444234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.444425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.444450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.444646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.444913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.444956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.445164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.445417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.445459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.445639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.445815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.445857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.446071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.446278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.446319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.446469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.446627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.446652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.446855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.447104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.447147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.447353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.447521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.447547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.447732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.447949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.447976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.448172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.448390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.448432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.448614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.448811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.448853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.449061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.449369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.449415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.449599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.449764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.449807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.449990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.450181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.450223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.450395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.450588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.450615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.450814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.451060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.451103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.451372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.451574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.451601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.451788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.452009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.452037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.452235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.452427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.452452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.452613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.452807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.452851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 
00:26:46.360 [2024-05-12 07:06:53.453028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.453218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.453261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.360 [2024-05-12 07:06:53.453464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.453615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.360 [2024-05-12 07:06:53.453641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.360 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.453853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.454038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.454081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.454307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.454473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.454498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 
00:26:46.361 [2024-05-12 07:06:53.454678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.454894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.454937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.455146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.455361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.455403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.455549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.455701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.455726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.455931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.456148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.456192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 
00:26:46.361 [2024-05-12 07:06:53.456373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.456521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.456547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.456742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.456977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.457019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.457287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.457504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.457529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.457730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.457961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.458004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 
00:26:46.361 [2024-05-12 07:06:53.458201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.458433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.458458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.458630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.458829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.458858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.459119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.459340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.459382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.459563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.459755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.459783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 
00:26:46.361 [2024-05-12 07:06:53.459988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.460236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.460277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.460486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.460713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.460739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.460912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.461156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.461184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 00:26:46.361 [2024-05-12 07:06:53.461399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.461565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.361 [2024-05-12 07:06:53.461591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.361 qpair failed and we were unable to recover it. 
00:26:46.361 [2024-05-12 07:06:53.461823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.361 [2024-05-12 07:06:53.462076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.361 [2024-05-12 07:06:53.462119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.361 qpair failed and we were unable to recover it.
00:26:46.361 [2024-05-12 07:06:53.462361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.361 [2024-05-12 07:06:53.462564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.361 [2024-05-12 07:06:53.462591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.361 qpair failed and we were unable to recover it.
00:26:46.361 [2024-05-12 07:06:53.462753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.361 [2024-05-12 07:06:53.462956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.361 [2024-05-12 07:06:53.463000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.361 qpair failed and we were unable to recover it.
00:26:46.361 [2024-05-12 07:06:53.463215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.361 [2024-05-12 07:06:53.463463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.361 [2024-05-12 07:06:53.463506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.361 qpair failed and we were unable to recover it.
00:26:46.651 [2024-05-12 07:06:53.463654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.463841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.463871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.464080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.464492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.464533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.464723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.464930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.464974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.465157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.465387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.465431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.465701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.465906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.465948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.466151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.466342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.466385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.466548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.466732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.466762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.466959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.467153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.467195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.467428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.467621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.467646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.467856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.468146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.468188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.468398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.468627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.468652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.468874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.469103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.469146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.469327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.469499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.469524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.469708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.469860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.469885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.470121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.470315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.470358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.470534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.470688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.470723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.470872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.471076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.471118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.471357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.471550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.471576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.471741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.471991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.472034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.472220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.472435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.472478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.472634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.472819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.472874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.473081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.473309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.473352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.473529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.473684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.473717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.473900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.474121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.474166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.474400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.474594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.474620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.474807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.475003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.475031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.475226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.475438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.475481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.475661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.475841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.475885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.652 qpair failed and we were unable to recover it.
00:26:46.652 [2024-05-12 07:06:53.476068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.652 [2024-05-12 07:06:53.476256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.476299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.476506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.476711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.476737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.476910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.477170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.477216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.477416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.477606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.477631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.477822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.478035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.478061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.478246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.478467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.478509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.478661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.478874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.478918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.479148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.479337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.479379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.479556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.479750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.479779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.479966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.480196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.480241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.480425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.480623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.480649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.480823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.481005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.481048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.481273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.481468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.481498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.481706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.481864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.481890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.482124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.482364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.482397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.482617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.482789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.482833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.483012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.483265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.483293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.483486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.483687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.483720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.483880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.484062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.484104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.484262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.484443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.484485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.484658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.484842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.484867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.485048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.485269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.485317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.485552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.485759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.485806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.486002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.486195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.486238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.486436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.486636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.486662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.486879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.487076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.487120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.487323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.487495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.487520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.487705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.487861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.487886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.488061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.488276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.488322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.653 [2024-05-12 07:06:53.488494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.488671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.653 [2024-05-12 07:06:53.488703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.653 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.488850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.489047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.489089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.489301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.489550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.489593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.489782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.489972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.489998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.490218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.490404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.490446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.490719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.490874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.490901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.491134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.491380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.491424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.491603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.491759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.491785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.491994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.492187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.492229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.492407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.492602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.492628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.492827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.493031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.654 [2024-05-12 07:06:53.493073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.654 qpair failed and we were unable to recover it.
00:26:46.654 [2024-05-12 07:06:53.493269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.493459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.493485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.493635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.493840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.493884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.494084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.494371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.494413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.494605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.494788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.494815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 
00:26:46.654 [2024-05-12 07:06:53.495050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.495246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.495288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.495463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.495640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.495666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.495876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.496108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.496134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.496332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.496606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.496631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 
00:26:46.654 [2024-05-12 07:06:53.496835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.497029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.497072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.497253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.498332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.498365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.498584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.498809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.498862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.499093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.499288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.499331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 
00:26:46.654 [2024-05-12 07:06:53.499492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.499648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.499673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.499886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.500152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.500194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.500391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.500565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.500591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.500788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.501028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.501070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 
00:26:46.654 [2024-05-12 07:06:53.501249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.501512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.501537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.501717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.501883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.501928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.502133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.502356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.654 [2024-05-12 07:06:53.502399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.654 qpair failed and we were unable to recover it. 00:26:46.654 [2024-05-12 07:06:53.502571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.502773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.502818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 
00:26:46.655 [2024-05-12 07:06:53.503021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.503213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.503256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.503442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.503619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.503644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.503860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.504086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.504129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.504366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.504552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.504577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 
00:26:46.655 [2024-05-12 07:06:53.504733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.504909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.504937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.505165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.505346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.505389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.505566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.505792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.505837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.506012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.506198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.506240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 
00:26:46.655 [2024-05-12 07:06:53.506392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.506570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.506595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.506798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.506993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.507036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.507275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.507493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.507518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.507703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.507917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.507963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 
00:26:46.655 [2024-05-12 07:06:53.508231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.508475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.508517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.508680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.508884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.508910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.509118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.509337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.509379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.509579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.509767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.509793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 
00:26:46.655 [2024-05-12 07:06:53.510003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.510251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.510294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.510501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.510686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.510720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.510883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.511062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.511105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.511317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.511561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.511604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 
00:26:46.655 [2024-05-12 07:06:53.511754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.511952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.511995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.512169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.512386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.512429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.512571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.512772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.512802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.513034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.513223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.513266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 
00:26:46.655 [2024-05-12 07:06:53.513455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.513623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.513650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.513847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.514048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.514089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.514298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.514496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.514522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 00:26:46.655 [2024-05-12 07:06:53.514708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.514886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.655 [2024-05-12 07:06:53.514928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.655 qpair failed and we were unable to recover it. 
00:26:46.655 [2024-05-12 07:06:53.515132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.515379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.515421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.515576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.515776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.515819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.516022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.516268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.516310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.516488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.516665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.516690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 
00:26:46.656 [2024-05-12 07:06:53.516930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.517118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.517162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.517376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.517548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.517575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.517805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.517973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.518000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.518173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.518368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.518393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 
00:26:46.656 [2024-05-12 07:06:53.518551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.518768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.518811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.519022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.519246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.519290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.519445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.519629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.519655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.519852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.520038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.520082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 
00:26:46.656 [2024-05-12 07:06:53.520286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.520501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.520526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.520726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.520936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.520980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.521161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.521389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.521433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.521618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.521780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.521825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 
00:26:46.656 [2024-05-12 07:06:53.522034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.522225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.522269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.522448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.522627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.522652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.522841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.523089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.523133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.523360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.523552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.523577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 
00:26:46.656 [2024-05-12 07:06:53.523752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.523978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.524006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.524202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.524434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.524459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.524620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.524799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.524843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.656 qpair failed and we were unable to recover it. 00:26:46.656 [2024-05-12 07:06:53.525048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.656 [2024-05-12 07:06:53.525274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.525322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 
00:26:46.657 [2024-05-12 07:06:53.525523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.525681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.525716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.525903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.526093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.526135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.526306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.526501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.526528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.526716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.526955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.527000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 
00:26:46.657 [2024-05-12 07:06:53.527182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.527412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.527455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.527634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.527846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.527889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.528072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.528283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.528328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.528506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.528710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.528736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 
00:26:46.657 [2024-05-12 07:06:53.528944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.529175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.529218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.529427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.529620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.529645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.529798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.529975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.530003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.530216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.530529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.530575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 
00:26:46.657 [2024-05-12 07:06:53.530731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.531000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.531027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.531242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.531493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.531535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.531750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.531929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.531957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.532234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.532486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.532528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 
00:26:46.657 [2024-05-12 07:06:53.532706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.532890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.532916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.533120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.533311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.533339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.533531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.533724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.533750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.533950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.534201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.534243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 
00:26:46.657 [2024-05-12 07:06:53.534445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.534632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.534657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.534850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.535117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.535146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.535338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.535500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.535525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.535729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.535929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.535972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 
00:26:46.657 [2024-05-12 07:06:53.536177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.536368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.536410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.536613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.536802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.536845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.537092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.537340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.537381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.657 [2024-05-12 07:06:53.537643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.537800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.537826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 
00:26:46.657 [2024-05-12 07:06:53.538092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.538372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.657 [2024-05-12 07:06:53.538415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.657 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.538593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.538771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.538797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.539005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.539220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.539263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.539495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.539702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.539729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 
00:26:46.658 [2024-05-12 07:06:53.539919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.540112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.540155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.540360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.540579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.540604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.540795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.540992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.541035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.541240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.541442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.541468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 
00:26:46.658 [2024-05-12 07:06:53.541677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.541854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.541898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.542108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.542329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.542373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.542544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.542728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.542755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.542974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.543208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.543251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 
00:26:46.658 [2024-05-12 07:06:53.543404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.543610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.543635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.543813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.544033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.544079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.544294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.544513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.544538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.544719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.544924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.544952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 
00:26:46.658 [2024-05-12 07:06:53.545149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.545334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.545376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.545560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.545756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.545784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.546007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.546227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.546273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.546458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.546634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.546659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 
00:26:46.658 [2024-05-12 07:06:53.546850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.547044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.547087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.547295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.547527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.547553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.547714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.547888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.547931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.548111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.548310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.548357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 
00:26:46.658 [2024-05-12 07:06:53.548531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.548721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.548748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.548933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.549151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.549179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.549417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.549586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.549612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.549818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.550050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.550092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 
00:26:46.658 [2024-05-12 07:06:53.550261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.550460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.550484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.550691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.550897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.550940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.658 qpair failed and we were unable to recover it. 00:26:46.658 [2024-05-12 07:06:53.551170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.658 [2024-05-12 07:06:53.551378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.551420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.551600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.551802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.551846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 
00:26:46.659 [2024-05-12 07:06:53.552037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.552266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.552308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.552539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.552749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.552783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.552992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.553218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.553261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.553524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.553731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.553756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 
00:26:46.659 [2024-05-12 07:06:53.553958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.554150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.554192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.554366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.554560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.554585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.554784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.554968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.555012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.555187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.555421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.555463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 
00:26:46.659 [2024-05-12 07:06:53.555642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.555831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.555875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.556065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.556276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.556318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.556525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.556720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.556746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 00:26:46.659 [2024-05-12 07:06:53.556924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.557144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.659 [2024-05-12 07:06:53.557190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.659 qpair failed and we were unable to recover it. 
00:26:46.659 [2024-05-12 07:06:53.557404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.557627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.557652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.557837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.558030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.558074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.558256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.558483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.558509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.558719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.558892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.558935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.559170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.559357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.559399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.559547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.559747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.559775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.560001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.560218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.560260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.560444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.560597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.560622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.560795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.561022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.561063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.561267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.561462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.561488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.561652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.561832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.561861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.562115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.562299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.562342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.562499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.562685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.562716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.562928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.563116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.563159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.563363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.563590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.563616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.659 qpair failed and we were unable to recover it.
00:26:46.659 [2024-05-12 07:06:53.563818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.564042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.659 [2024-05-12 07:06:53.564084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.564248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.564476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.564519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.564718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.564902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.564927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.565132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.565386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.565430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.565583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.565755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.565799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.566011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.566228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.566270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.566454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.566653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.566679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.566880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.567135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.567176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.567351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.567549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.567574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.567749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.567945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.567987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.568194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.568411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.568453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.568627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.568823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.568867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.569077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.569299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.569341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.569515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.569700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.569728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.569908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.570121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.570165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.570372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.570564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.570589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.570786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.571022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.571066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.571246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.571499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.571541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.571743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.571943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.571984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.572162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.572412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.572454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.572664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.572895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.572940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.573118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.573374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.573416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.573620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.573804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.573833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.574023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.574239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.574282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.574486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.574665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.574690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.574881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.575157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.575201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.575438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.575636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.575660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.660 qpair failed and we were unable to recover it.
00:26:46.660 [2024-05-12 07:06:53.575818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.576018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.660 [2024-05-12 07:06:53.576060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.576269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.576487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.576530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.576718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.576923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.576948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.577148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.577368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.577410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.577595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.577771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.577797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.578023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.578206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.578249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.578456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.578622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.578647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.578821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.579050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.579092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.579308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.579503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.579528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.579680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.579859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.579885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.580056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.580246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.580289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.580443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.580626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.580651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.580857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.581086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.581130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.581312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.581484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.581511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.581673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.581829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.581856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.582073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.582288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.582332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.582484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.582686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.582720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.582895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.583110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.583154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.583388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.583557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.583583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.583814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.584039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.584084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.584312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.584524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.584550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.584739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.584931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.584978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.585150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.585412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.585455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.585613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.585780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.585810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.586024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.586273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.586314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.586467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.586645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.661 [2024-05-12 07:06:53.586671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.661 qpair failed and we were unable to recover it.
00:26:46.661 [2024-05-12 07:06:53.586880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.587091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.587132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.587332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.587521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.587546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.587774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.587995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.588038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.588251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.588472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.588515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.588666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.588885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.588930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.589165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.589393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.589419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.589625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.589848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.589893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.590093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.590326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.590352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 
00:26:46.662 [2024-05-12 07:06:53.590528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.590722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.590749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.590933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.591184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.591226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.591471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.591662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.591688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.591876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.592046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.592087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 
00:26:46.662 [2024-05-12 07:06:53.592320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.592515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.592556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.592763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.592959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.593001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.593233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.593460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.593486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.593664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.593905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.593934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 
00:26:46.662 [2024-05-12 07:06:53.594158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.594415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.594457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.594604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.594838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.594866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.595111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.595365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.595407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-05-12 07:06:53.595615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.595842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-05-12 07:06:53.595889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 
00:26:46.662 [2024-05-12 07:06:53.596091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3147212 Killed "${NVMF_APP[@]}" "$@"
00:26:46.662 [2024-05-12 07:06:53.596379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.596420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 07:06:53 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:26:46.662 [2024-05-12 07:06:53.596607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 07:06:53 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:46.662 [2024-05-12 07:06:53.596807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 07:06:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:26:46.662 [2024-05-12 07:06:53.596855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 07:06:53 -- common/autotest_common.sh@712 -- # xtrace_disable
00:26:46.662 07:06:53 -- common/autotest_common.sh@10 -- # set +x
00:26:46.662 [2024-05-12 07:06:53.597089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.597317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.597359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.597537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.597687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.597719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.597928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.598125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.598167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.598371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.598632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.598656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.598911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.599124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.599165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.599375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.599585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.662 [2024-05-12 07:06:53.599610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.662 qpair failed and we were unable to recover it.
00:26:46.662 [2024-05-12 07:06:53.599828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.600059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.600086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.663 qpair failed and we were unable to recover it.
00:26:46.663 [2024-05-12 07:06:53.600322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.600515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 07:06:53 -- nvmf/common.sh@469 -- # nvmfpid=3147796
[2024-05-12 07:06:53.600540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.663 qpair failed and we were unable to recover it.
00:26:46.663 07:06:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
07:06:53 -- nvmf/common.sh@470 -- # waitforlisten 3147796
[2024-05-12 07:06:53.600682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 07:06:53 -- common/autotest_common.sh@819 -- # '[' -z 3147796 ']'
[2024-05-12 07:06:53.600884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-12 07:06:53.600910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.663 qpair failed and we were unable to recover it.
00:26:46.663 07:06:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-05-12 07:06:53.601087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 07:06:53 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:46.663 07:06:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-05-12 07:06:53.601299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-12 07:06:53.601342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.663 qpair failed and we were unable to recover it.
00:26:46.663 07:06:53 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:46.663 07:06:53 -- common/autotest_common.sh@10 -- # set +x
00:26:46.663 [2024-05-12 07:06:53.601583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.601811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.601838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.663 qpair failed and we were unable to recover it.
00:26:46.663 [2024-05-12 07:06:53.602012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.602227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.602255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.663 qpair failed and we were unable to recover it.
00:26:46.663 [2024-05-12 07:06:53.602524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.602690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.602724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.663 qpair failed and we were unable to recover it.
00:26:46.663 [2024-05-12 07:06:53.602866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.603050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.663 [2024-05-12 07:06:53.603092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.663 qpair failed and we were unable to recover it.
00:26:46.663 [2024-05-12 07:06:53.603365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.603559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.603584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.603785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.604026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.604070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.604274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.604467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.604510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.604715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.604908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.604951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 
00:26:46.663 [2024-05-12 07:06:53.605161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.605358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.605402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.605606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.605788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.605816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.606022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.606269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.606316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.606500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.606694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.606726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 
00:26:46.663 [2024-05-12 07:06:53.606898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.607137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.607184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.607353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.607555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.607580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.607760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.607966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.608011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.608191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.608417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.608443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 
00:26:46.663 [2024-05-12 07:06:53.608622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.608820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.608863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.609145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.609377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.609425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.609632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.609808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.609853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.610024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.610238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.610280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 
00:26:46.663 [2024-05-12 07:06:53.610581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.610769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.610798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.611022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.611229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.611275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.611473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.611643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.611668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-05-12 07:06:53.611853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-05-12 07:06:53.612049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.612093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-05-12 07:06:53.612257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.612423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.612449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.612626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.612825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.612870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.613055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.613267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.613304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.613474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.613629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.613655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-05-12 07:06:53.613882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.614078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.614120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.614362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.614553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.614579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.614780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.615013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.615055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.615255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.615469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.615512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-05-12 07:06:53.615666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.615878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.615922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.616094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.616313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.616356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.616510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.616684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.616716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.616922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.617146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.617174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-05-12 07:06:53.617369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.617561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.617587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.617795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.618004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.618048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.618255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.618477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.618502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.618676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.618902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.618945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-05-12 07:06:53.619145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.619342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.619385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.619534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.619743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.619769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.619938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.620170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.620196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.620400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.620599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.620624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-05-12 07:06:53.620859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.621053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.621095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.621267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.621527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.621570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.621720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.621890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.621933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.622139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.622394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.622437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-05-12 07:06:53.622617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.622816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.622861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.623063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.623311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.623354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.623509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.623707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.623733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-05-12 07:06:53.623916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.624107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.624149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-05-12 07:06:53.624385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.624604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-05-12 07:06:53.624628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.624812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.624995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.625037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.625209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.625423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.625468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.625621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.625831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.625875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 
00:26:46.665 [2024-05-12 07:06:53.626054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.626278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.626321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.626472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.626625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.626655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.626843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.627041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.627084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.627288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.627508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.627534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 
00:26:46.665 [2024-05-12 07:06:53.627680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.627899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.627942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.628127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.628351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.628394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.628611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.628813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.628859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.629038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.629275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.629319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 
00:26:46.665 [2024-05-12 07:06:53.629473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.629619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.629644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.629853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.630046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.630090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.630323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.630512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.630538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.630759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.630948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.630997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 
00:26:46.665 [2024-05-12 07:06:53.631204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.631437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.631481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.631648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.631812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.631837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.632048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.632305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.632347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.632527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.632713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.632741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 
00:26:46.665 [2024-05-12 07:06:53.632925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.633144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.633191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.633391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.633605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.633630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.633863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.634084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.634129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.634325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.634520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.634547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 
00:26:46.665 [2024-05-12 07:06:53.634780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.634977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.635019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-05-12 07:06:53.635193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-05-12 07:06:53.635387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.635418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.635566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.635765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.635809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.636019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.636221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.636263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 
00:26:46.666 [2024-05-12 07:06:53.636469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.636619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.636644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.636828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.637027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.637069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.637276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.637445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.637470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.637648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.637836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.637880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 
00:26:46.666 [2024-05-12 07:06:53.638088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.638317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.638343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.638548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.638797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.638840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.639017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.639267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.639309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.639461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.639660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.639689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 
00:26:46.666 [2024-05-12 07:06:53.639874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.640068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.640109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.640336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.640536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.640564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.640736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.640925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.640969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.641180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.641404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.641452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 
00:26:46.666 [2024-05-12 07:06:53.641644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.641846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.641890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.642085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.642297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.642339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.642541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.642727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.642753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.642926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.643150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.643193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 
00:26:46.666 [2024-05-12 07:06:53.643372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.643578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.643603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.643798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.644020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.644063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.644274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.644440] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:46.666 [2024-05-12 07:06:53.644486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.644531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 [2024-05-12 07:06:53.644533] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.666 qpair failed and we were unable to recover it. 
00:26:46.666 [2024-05-12 07:06:53.644721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.644925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.644967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.645207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.645488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.645530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.645723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.645953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.645982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.646172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.646389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.646432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 
00:26:46.666 [2024-05-12 07:06:53.646611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.646809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.646854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.647066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.647296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.647324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.647489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.647679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.666 [2024-05-12 07:06:53.647714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.666 qpair failed and we were unable to recover it. 00:26:46.666 [2024-05-12 07:06:53.647899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.648155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.648198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 
00:26:46.667 [2024-05-12 07:06:53.648412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.648610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.648636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.648817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.649007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.649050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.649228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.649479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.649527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.649744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.649926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.649970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 
00:26:46.667 [2024-05-12 07:06:53.650175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.650439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.650487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.650693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.650908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.650951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.651155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.651420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.651462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.651639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.651798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.651825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 
00:26:46.667 [2024-05-12 07:06:53.651997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.652262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.652294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.652550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.652796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.652841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.653027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.653282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.653333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.653542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.653753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.653782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 
00:26:46.667 [2024-05-12 07:06:53.653974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.654253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.654302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.654491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.654689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.654723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.654912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.655163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.655211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.655458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.655632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.655658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 
00:26:46.667 [2024-05-12 07:06:53.655848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.656040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.656068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.656313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.656514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.656539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.656711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.656886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.656929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.657107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.657302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.657343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 
00:26:46.667 [2024-05-12 07:06:53.657532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.657713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.657739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.657924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.658142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.658187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.658417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.658610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.658637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.658839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.659055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.659099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 
00:26:46.667 [2024-05-12 07:06:53.659285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.659506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.659551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.659747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.659945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.659989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.660163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.660397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.660424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 00:26:46.667 [2024-05-12 07:06:53.660633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.660809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.660853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.667 qpair failed and we were unable to recover it. 
00:26:46.667 [2024-05-12 07:06:53.661028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.661244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.667 [2024-05-12 07:06:53.661286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.661453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.661656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.661681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.661870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.662133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.662179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.662373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.662569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.662593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 
00:26:46.668 [2024-05-12 07:06:53.662821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.663068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.663110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.663319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.663536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.663562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.663706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.663881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.663923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.664157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.664383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.664427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 
00:26:46.668 [2024-05-12 07:06:53.664605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.664836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.664881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.665082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.665298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.665344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.665490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.665646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.665671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.665893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.666116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.666160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 
00:26:46.668 [2024-05-12 07:06:53.666393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.666559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.666585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.666788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.667021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.667048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.667279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.667473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.667498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.667688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.667872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.667897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 
00:26:46.668 [2024-05-12 07:06:53.668079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.668328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.668370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.668579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.668785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.668811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.668992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.669242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.669284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.669482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.669681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.669713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 
00:26:46.668 [2024-05-12 07:06:53.669922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.670133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.670175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.670374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.670568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.670593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.670790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.671023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.671065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.671306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.671499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.671524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 
00:26:46.668 [2024-05-12 07:06:53.671705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.671863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.671890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.672128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.672374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.672418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.672594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.672764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.672807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.673011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.673175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.673202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 
00:26:46.668 [2024-05-12 07:06:53.673406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.673632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.673657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.673843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.674064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.674107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-05-12 07:06:53.674316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-05-12 07:06:53.674511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.674536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-05-12 07:06:53.674725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.674906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.674949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 
00:26:46.669 [2024-05-12 07:06:53.675189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.675405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.675447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-05-12 07:06:53.675650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.675808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.675835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-05-12 07:06:53.676050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.676301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.676344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-05-12 07:06:53.676559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.676788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.676833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 
00:26:46.669 [2024-05-12 07:06:53.677002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.677242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.677285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-05-12 07:06:53.677535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.677750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.677794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-05-12 07:06:53.677995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.678188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.678216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-05-12 07:06:53.678434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.678634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-05-12 07:06:53.678659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 
00:26:46.669 [2024-05-12 07:06:53.678882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.679143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.679183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.679332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.679609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.679634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 EAL: No free 2048 kB hugepages reported on node 1
00:26:46.669 [2024-05-12 07:06:53.679853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.680057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.680101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.680304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.680468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.680493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.680704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.680897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.680940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.681171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.681423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.681466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.681673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.681873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.681899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.682104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.682346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.682389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.682694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.682983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.683009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.683195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.683378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.683403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.683588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.683748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.683774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.683924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.684090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.684114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.684334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.684513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.684538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.684727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.684881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.684908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.685070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.685274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.685300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.685484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.685655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.685680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.685872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.686075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.686100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.669 qpair failed and we were unable to recover it.
00:26:46.669 [2024-05-12 07:06:53.686288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.686468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.669 [2024-05-12 07:06:53.686493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.686719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.686861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.686886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.687045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.687197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.687224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.687403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.687585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.687617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.687799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.687996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.688021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.688226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.688433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.688458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.688636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.688826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.688852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.689034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.689211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.689238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.689417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.689595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.689622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.689834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.690010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.690034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.690234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.690412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.690437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.690619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.690782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.690808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.691009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.691189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.691214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.691392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.691599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.691623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.691808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.691989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.692014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.692221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.692383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.692413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.692588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.692762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.692788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.692967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.693148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.693173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.693349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.693529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.693556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.693780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.693928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.693953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.694151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.694328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.694353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.694531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.694736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.694762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.694970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.695145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.695170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.695353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.695559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.695584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.695762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.695912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.695936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.696119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.696298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.696330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.696535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.696724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.696750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.670 qpair failed and we were unable to recover it.
00:26:46.670 [2024-05-12 07:06:53.696942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.670 [2024-05-12 07:06:53.697126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.697150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.697361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.697562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.697586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.697825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.697985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.698010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.698200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.698393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.698417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.698594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.698770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.698796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.698977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.699153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.699178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.699365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.699554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.699593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.699776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.699928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.699953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.700199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.700396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.700425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.700612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.700757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.700784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.700964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.701218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.701243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.701397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.701559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.701584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.701774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.701948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.701973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.702151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.702354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.702379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.702570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.702788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.702813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.702998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.703149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.703191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.703357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.703504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.703529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.703736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.703914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.703940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.704114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.704318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.704361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.704589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.704762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.704788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.704969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.705220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.705260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.705471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.705645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.705671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.705833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.705979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.706018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.706244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.706453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.706478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.706621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.706801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.706827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.707005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.707204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.707229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.707480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.707666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.707691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.707881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.708055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.708081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.708229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.708427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.708467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.708661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.708844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.671 [2024-05-12 07:06:53.708871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.671 qpair failed and we were unable to recover it.
00:26:46.671 [2024-05-12 07:06:53.709054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.709257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.709282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.709460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.709657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.709683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.709876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.710064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.710089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.710257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.710453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.710477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.710713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.710886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.710912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.711117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.711297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.711337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.711514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.711875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.711901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.712079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.712282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.712307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.712512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.712727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.712753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.712941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.713136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.713160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.713306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.713519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.713545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.713734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.713927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.713953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.714149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.714349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.714373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.714590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.714794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-05-12 07:06:53.714819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-05-12 07:06:53.714964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.715138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.715163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.715265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:46.672 [2024-05-12 07:06:53.715341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.715523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.715548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.715735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.715913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.715939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.716151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.716326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.716352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 
00:26:46.672 [2024-05-12 07:06:53.716556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.716713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.716744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.716906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.717080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.717105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.717310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.717519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.717544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.717719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.717903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.717928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 
00:26:46.672 [2024-05-12 07:06:53.718191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.718373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.718398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.718579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.718758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.718783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.718937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.719113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.719137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.719316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.719466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.719491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 
00:26:46.672 [2024-05-12 07:06:53.719706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.719852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.719877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.720053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.720263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.720288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.720496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.720677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-05-12 07:06:53.720716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-05-12 07:06:53.720898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.721082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.721107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 
00:26:46.673 [2024-05-12 07:06:53.721287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.721469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.721496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.721681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.721895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.721921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.722130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.722335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.722361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.722537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.722740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.722766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 
00:26:46.673 [2024-05-12 07:06:53.722944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.723124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.723149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.723354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.723645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.723670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.723897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.724046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.724072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.724228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.724433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.724458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 
00:26:46.673 [2024-05-12 07:06:53.724631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.724786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.724812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.724969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.725143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.725168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.725374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.725579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.725605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.725794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.725943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.725968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 
00:26:46.673 [2024-05-12 07:06:53.726173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.726359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.726385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.726564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.726716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.726743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.726950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.727132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.727157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.727334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.727517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.727557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 
00:26:46.673 [2024-05-12 07:06:53.727776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.727972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.727999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.728299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.728511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.728536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.728700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.728909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.728935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.729123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.729367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.729393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 
00:26:46.673 [2024-05-12 07:06:53.729612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.729817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.729843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.730004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.730189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.730214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.730415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.730591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.730617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.730797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.730950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.730975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 
00:26:46.673 [2024-05-12 07:06:53.731180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.731372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.731398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.731584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.731760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.731787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.731970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.732169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.732194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 00:26:46.673 [2024-05-12 07:06:53.732365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.732540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.732565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.673 qpair failed and we were unable to recover it. 
00:26:46.673 [2024-05-12 07:06:53.732763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.732905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.673 [2024-05-12 07:06:53.732930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.733144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.733351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.733376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.733582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.733741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.733767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.733972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.734197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.734222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 
00:26:46.674 [2024-05-12 07:06:53.734415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.734658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.734704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.734882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.735063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.735088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.735294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.735500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.735525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.735684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.735916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.735942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 
00:26:46.674 [2024-05-12 07:06:53.736125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.736277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.736304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.736505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.736658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.736684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.736875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.737027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.737053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.737261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.737434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.737460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 
00:26:46.674 [2024-05-12 07:06:53.737617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.737777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.737804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.737988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.738136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.738162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.738337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.738540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.738565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.738737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.738918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.738944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 
00:26:46.674 [2024-05-12 07:06:53.739102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.739303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.739329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.739507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.739683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.739716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.739899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.740105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.740130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 00:26:46.674 [2024-05-12 07:06:53.740307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.740454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.674 [2024-05-12 07:06:53.740479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.674 qpair failed and we were unable to recover it. 
00:26:46.674 [2024-05-12 07:06:53.740655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.740847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.740873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-05-12 07:06:53.741029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.741211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.741236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-05-12 07:06:53.741386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.741591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.741616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-05-12 07:06:53.741793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.742000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.742026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-05-12 07:06:53.742204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.742380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-05-12 07:06:53.742405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.742591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.742793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.742819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.742973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.743155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.743180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.743356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.743535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.743561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.743768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.743977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.744002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.744185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.744363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.744388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.744573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.744723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.744749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.744937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.745120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.745145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.745321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.745523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.745548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.745750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.745927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.745952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.746112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.746324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.746349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.746554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.746734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.746761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.746912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.747063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.747089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.747274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.747418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.747446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.747604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.747749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.747775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.747939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.748169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.748197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.748415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.748626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.748651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.748906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.749100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.749128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.749277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.749456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.749481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.749679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.749902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.749928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.750081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.750286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.750312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.750487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.750635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.750660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.750846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.751023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.751049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.751252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.751424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.751449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.751625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.751803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.751830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.752014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.752189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.752214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.752365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.752569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.752595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.752778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.752935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.752962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.753183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.753392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.753417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.753563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.753741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.753767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-05-12 07:06:53.753921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-05-12 07:06:53.754108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.754133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.754307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.754488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.754513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.754660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.754867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.754893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.755068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.755248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.755273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.755447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.755623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.755648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.755828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.756031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.756057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.756243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.756398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.756423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.756601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.756790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.756820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.756999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.757174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.757199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.757383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.757588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.757613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.757789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.757963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.757989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.758142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.758324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.758349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.758499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.758705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.758730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.758934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.759110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.759135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.759285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.759425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.759450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.759660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.759872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.759898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.760090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.760302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.760327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.760507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.760682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.760715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.760925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.761101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.761126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.761301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.761446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.761471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.761638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.761848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.761874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.762049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.762252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.762277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.762456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.762664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.762689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.762850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.763028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.763053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.763262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.763441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.763467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.763720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.763872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.763897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.764080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.764284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.764309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.764482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.764683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-05-12 07:06:53.764713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-05-12 07:06:53.764899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.765069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.765094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.765243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.765444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.765469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.765650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.765858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.765884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.766062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.766242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.766267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.766453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.766623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.766649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.766829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.767013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.767038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.767215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.767425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.767450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.767656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.767836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.767862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.768062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.768205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.768230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.768413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.768594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-05-12 07:06:53.768620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-05-12 07:06:53.768807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.768961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.768986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-05-12 07:06:53.769193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.769338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.769363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-05-12 07:06:53.769525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.769734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.769760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-05-12 07:06:53.769933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.770080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.770105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 
00:26:46.677 [2024-05-12 07:06:53.770288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.770446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.770472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-05-12 07:06:53.770650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.770860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.770885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-05-12 07:06:53.771048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.771232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.771258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-05-12 07:06:53.771439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.771584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.771610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 
00:26:46.677 [2024-05-12 07:06:53.771798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.771943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.771968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-05-12 07:06:53.772149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.772332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.677 [2024-05-12 07:06:53.772358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-05-12 07:06:53.772535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.772755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.772782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.772970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.773156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.773181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 
00:26:46.954 [2024-05-12 07:06:53.773358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.773530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.773555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.773710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.773966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.773990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.774176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.774325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.774351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.774506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.774682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.774713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 
00:26:46.954 [2024-05-12 07:06:53.774871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.775045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.775070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.775253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.775457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.775482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.775687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.775847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.775872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.776071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.776217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.776242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 
00:26:46.954 [2024-05-12 07:06:53.776422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.776597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.776626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.776833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.777009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.777035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.777239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.777389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.777414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.777592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.777788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.777813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 
00:26:46.954 [2024-05-12 07:06:53.777970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.778151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.778177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.778362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.778520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.778547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.778705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.778885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.778909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.779064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.779212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.779237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 
00:26:46.954 [2024-05-12 07:06:53.779391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.779546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.779571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.779854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.780039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.780063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.954 qpair failed and we were unable to recover it. 00:26:46.954 [2024-05-12 07:06:53.780216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.780391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.954 [2024-05-12 07:06:53.780421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.780624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.780838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.780863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 
00:26:46.955 [2024-05-12 07:06:53.781017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.781165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.781190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.781371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.781575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.781600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.781780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.781982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.782006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.782181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.782361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.782386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 
00:26:46.955 [2024-05-12 07:06:53.782564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.782716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.782741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.782885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.783062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.783088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.783241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.783425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.783450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.783639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.783824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.783850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 
00:26:46.955 [2024-05-12 07:06:53.784006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.784208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.784233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.784449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.784599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.784626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.784825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.784977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.785002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.785154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.785323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.785348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 
00:26:46.955 [2024-05-12 07:06:53.785496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.785640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.785666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.785833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.785981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.786006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.786160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.786307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.786331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.786545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.786750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.786776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 
00:26:46.955 [2024-05-12 07:06:53.786929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.787105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.787129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.787335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.787508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.787533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.787786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.787987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.788012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.788190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.788367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.788391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 
00:26:46.955 [2024-05-12 07:06:53.788562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.788770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.788813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.788990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.789236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.789262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.789451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.789632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.789657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.789805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.789981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.790007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 
00:26:46.955 [2024-05-12 07:06:53.790212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.790420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.790445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.790646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.790825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.790851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.790994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.791168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.791209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 00:26:46.955 [2024-05-12 07:06:53.791398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.791571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.955 [2024-05-12 07:06:53.791597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.955 qpair failed and we were unable to recover it. 
00:26:46.956 [2024-05-12 07:06:53.791754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.956 [2024-05-12 07:06:53.791903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.956 [2024-05-12 07:06:53.791927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.956 qpair failed and we were unable to recover it. 00:26:46.956 [2024-05-12 07:06:53.792104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.956 [2024-05-12 07:06:53.792285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.956 [2024-05-12 07:06:53.792309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.956 qpair failed and we were unable to recover it. 00:26:46.956 [2024-05-12 07:06:53.792508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.956 [2024-05-12 07:06:53.792659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.956 [2024-05-12 07:06:53.792684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.956 qpair failed and we were unable to recover it. 00:26:46.956 [2024-05-12 07:06:53.792863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.956 [2024-05-12 07:06:53.793012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.956 [2024-05-12 07:06:53.793037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.956 qpair failed and we were unable to recover it. 
00:26:46.956 [2024-05-12 07:06:53.793239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.793393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.793418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.793597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.793779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.793804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.793973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.794124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.794150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.794303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.794463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.794489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.794683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.794917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.794942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.795122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.795304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.795329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.795501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.795643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.795668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.795858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.796008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.796034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.796178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.796357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.796384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.796586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.796734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.796762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.796956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.797102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.797126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.797318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.797541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.797565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.797735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.797890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.797915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.798117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.798255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.798280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.798430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.798609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.798634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.798814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.798988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.799013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.799194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.799339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.799364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.799544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.799691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.799733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.799950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.800124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.800149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.800328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.800507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.800532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.800675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.800858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.800883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.801044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.801221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.801246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.801459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.801611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.801635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.801831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.801974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.801999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.802215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.802353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.802378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.956 qpair failed and we were unable to recover it.
00:26:46.956 [2024-05-12 07:06:53.802520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.802662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.956 [2024-05-12 07:06:53.802688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.802850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.802996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.803021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.803196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.803335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.803360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.803557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.803774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.803799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.803949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.804149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.804174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.804325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.804509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.804534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.804716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.804866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.804891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.805060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.805265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.805290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.805492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.805672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.805704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.805880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.806029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.806054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.806214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.806364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.806389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.806541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.806749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.806775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.806930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.807133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.807158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.807372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.807559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.807585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.807742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.807914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.807939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.808113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.808286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.808310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.808485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.808689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.808720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.808932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.809087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.809112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.809297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.809450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.809475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.809673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.809832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.809858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.810011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.810160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.810185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.810390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.810567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.810592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.810747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.810896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.810921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.811108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.811282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.811307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.811461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.811610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.811636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.811786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.811929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.811954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.812218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.812357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.812381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.812557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.812732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.812758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.812910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.813086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.813113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.813291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.813473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.813498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.957 qpair failed and we were unable to recover it.
00:26:46.957 [2024-05-12 07:06:53.813674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.957 [2024-05-12 07:06:53.813860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.813886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.814074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.814220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.814245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.814422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.814599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.814624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.814831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.815012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.815038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.815246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.815428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.815453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.815603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.815809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.815835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.816041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.816210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.816235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.816413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.816587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.816611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.816820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.817018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.817043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.817221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.817396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.817421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.817573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.817799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.817825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.817973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.818124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.818150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.818327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.818474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.818499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.818673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.818853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.818883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.819032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.819181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.819208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.819451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.819647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.819671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.819879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.820051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.820077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.820247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.820418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.820443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.820626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.820785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.820812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.820959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.821130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.821155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.821371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.821516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.821541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.821723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.821925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.821950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.958 qpair failed and we were unable to recover it.
00:26:46.958 [2024-05-12 07:06:53.822127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.958 [2024-05-12 07:06:53.822305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.822330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.822532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.822735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.822760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.822948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.823126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.823150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.823346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.823518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.823543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.823718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.823869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.823894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.824076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.824273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.824297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.824470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.824673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.824703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.824881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.825091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.825115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.825289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.825443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.825467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.825613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.825759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.825785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.825955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.826096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.959 [2024-05-12 07:06:53.826121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.959 qpair failed and we were unable to recover it.
00:26:46.959 [2024-05-12 07:06:53.826294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.826472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.826497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.826653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.826860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.826886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.827035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.827232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.827256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.827444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.827595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.827619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 
00:26:46.959 [2024-05-12 07:06:53.827777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.827922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.827947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.828120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.828270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.828295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.828463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.828639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.828664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.828853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.829057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.829082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 
00:26:46.959 [2024-05-12 07:06:53.829228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.829403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.829428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.829610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.829790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.829815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.829963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.830116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.830140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.830284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.830431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.830456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 
00:26:46.959 [2024-05-12 07:06:53.830634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.830642] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:46.959 [2024-05-12 07:06:53.830773] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.959 [2024-05-12 07:06:53.830781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.830794] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.959 [2024-05-12 07:06:53.830805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 [2024-05-12 07:06:53.830808] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.830862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:46.959 [2024-05-12 07:06:53.830957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.831002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:46.959 [2024-05-12 07:06:53.831039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:46.959 [2024-05-12 07:06:53.831116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.831139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 
00:26:46.959 [2024-05-12 07:06:53.831113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:46.959 [2024-05-12 07:06:53.831296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.831444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.831469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.831628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.831800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.831825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.959 [2024-05-12 07:06:53.832019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.832190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.959 [2024-05-12 07:06:53.832215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.959 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.832392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.832543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.832567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 
00:26:46.960 [2024-05-12 07:06:53.832724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.832883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.832907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.833058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.833236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.833260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.833408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.833567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.833592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.833769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.834053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.834077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 
00:26:46.960 [2024-05-12 07:06:53.834223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.834504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.834529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.834676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.834843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.834867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.835038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.835190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.835215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.835385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.835525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.835549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 
00:26:46.960 [2024-05-12 07:06:53.835727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.835884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.835909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.836070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.836224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.836249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.836420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.836573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.836598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.836756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.836904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.836929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 
00:26:46.960 [2024-05-12 07:06:53.837209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.837369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.837394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.837629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.837811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.837837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.837989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.838170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.838195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.838520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.838736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.838761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 
00:26:46.960 [2024-05-12 07:06:53.838905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.839045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.839069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.839221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.839400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.839425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.839606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.839764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.839789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.839971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.840158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.840182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 
00:26:46.960 [2024-05-12 07:06:53.840360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.840509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.840534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.840722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.840894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.840923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.841079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.841225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.841250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.841398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.841543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.841567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 
00:26:46.960 [2024-05-12 07:06:53.841755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.841910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.841934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.842079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.842249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.842274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.842422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.842567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.842592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 00:26:46.960 [2024-05-12 07:06:53.842757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.842902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.960 [2024-05-12 07:06:53.842928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.960 qpair failed and we were unable to recover it. 
00:26:46.961 [2024-05-12 07:06:53.843108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.843312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.843336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.843478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.843654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.843680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.843946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.844122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.844147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.844300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.844497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.844525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 
00:26:46.961 [2024-05-12 07:06:53.844706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.844860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.844885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.845031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.845185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.845210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.845386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.845553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.845578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.845762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.845906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.845931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 
00:26:46.961 [2024-05-12 07:06:53.846082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.846230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.846255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.846459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.846609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.846634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.846810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.846970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.846994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.847160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.847321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.847346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 
00:26:46.961 [2024-05-12 07:06:53.847522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.847671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.847700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.847867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.848007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.848032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.848219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.848366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.848391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 00:26:46.961 [2024-05-12 07:06:53.848560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.848751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.961 [2024-05-12 07:06:53.848776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.961 qpair failed and we were unable to recover it. 
00:26:46.961 [2024-05-12 07:06:53.848930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.849107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.849131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.849311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.849455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.849480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.849640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.849790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.849816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.849970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.850146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.850170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.850348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.850517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.850542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.850693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.850867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.850892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.851035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.851227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.851252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.851418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.851621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.851645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.851815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.851970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.851997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.852281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.852418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.852443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.852591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.852776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.852802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.852975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.853161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.853188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.853381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.853537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.853562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.961 qpair failed and we were unable to recover it.
00:26:46.961 [2024-05-12 07:06:53.853758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.961 [2024-05-12 07:06:53.853951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.853976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.854229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.854408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.854434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.854578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.854766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.854791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.854946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.855130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.855155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.855298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.855478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.855503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.855673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.855874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.855899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.856084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.856225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.856249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.856416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.856624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.856649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.856807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.856957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.856984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.857140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.857283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.857308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.857483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.857635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.857660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.857823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.857974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.858002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.858172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.858327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.858351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.858524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.858699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.858724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.858887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.859030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.859055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.859227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.859375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.859400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.859546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.859708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.859735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.859889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.860043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.860067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.860219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.860368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.860393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.860546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.860686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.860717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.860876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.861024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.861049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.861210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.861388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.861413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.861562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.861714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.861739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.861913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.862052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.862076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.862227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.862378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.862403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.862601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.862786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.862819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.862964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.863140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.863164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.863339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.863544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.863569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.863731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.863903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.863927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.864084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.864232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.962 [2024-05-12 07:06:53.864257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.962 qpair failed and we were unable to recover it.
00:26:46.962 [2024-05-12 07:06:53.864402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.864552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.864577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.864740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.864895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.864920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.865142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.865326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.865351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.865528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.865713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.865738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.865885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.866041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.866066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.866214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.866497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.866521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.866673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.866905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.866930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.867100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.867254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.867279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.867473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.867659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.867683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.867852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.868005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.868030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.868208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.868386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.868410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.868585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.868744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.868770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.868950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.869102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.869127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.869291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.869443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.869468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.869631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.869785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.869810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.869980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.870240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.870264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.870549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.870775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.870800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.870976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.871124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.871149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.871349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.871538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.871563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.871764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.871953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.871978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.872260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.872433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.872458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.872610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.872782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.872807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.872986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.873139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.873164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.873328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.873475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.873500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.873676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.873861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.873886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.963 [2024-05-12 07:06:53.874034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.874190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.963 [2024-05-12 07:06:53.874214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.963 qpair failed and we were unable to recover it.
00:26:46.964 [2024-05-12 07:06:53.874364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.964 [2024-05-12 07:06:53.874518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.964 [2024-05-12 07:06:53.874543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.964 qpair failed and we were unable to recover it.
00:26:46.964 [2024-05-12 07:06:53.874728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.964 [2024-05-12 07:06:53.874886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.964 [2024-05-12 07:06:53.874911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.964 qpair failed and we were unable to recover it.
00:26:46.964 [2024-05-12 07:06:53.875086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.964 [2024-05-12 07:06:53.875228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.964 [2024-05-12 07:06:53.875254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.964 qpair failed and we were unable to recover it.
00:26:46.964 [2024-05-12 07:06:53.875424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.875615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.875639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.875823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.875977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.876002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.876170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.876351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.876376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.876544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.876690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.876722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 
00:26:46.964 [2024-05-12 07:06:53.876900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.877079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.877104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.877276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.877449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.877474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.877644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.877825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.877850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.878041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.878227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.878254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 
00:26:46.964 [2024-05-12 07:06:53.878434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.878709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.878734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.878889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.879052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.879078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.879237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.879391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.879415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.879713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.879868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.879894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 
00:26:46.964 [2024-05-12 07:06:53.880042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.880186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.880211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.880413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.880581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.880606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.880787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.880929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.880954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.881116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.881272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.881296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 
00:26:46.964 [2024-05-12 07:06:53.881448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.881631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.881655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.881824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.881987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.882016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.882168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.882335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.882359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.882563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.882720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.882746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 
00:26:46.964 [2024-05-12 07:06:53.882905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.883080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.883105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.883258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.883437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.883462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.883612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.883885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.883911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.884098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.884377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.884402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 
00:26:46.964 [2024-05-12 07:06:53.884561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.884707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.884733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.884887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.885085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.885111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.964 qpair failed and we were unable to recover it. 00:26:46.964 [2024-05-12 07:06:53.885274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.885426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.964 [2024-05-12 07:06:53.885452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.885601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.885792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.885818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 
00:26:46.965 [2024-05-12 07:06:53.885973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.886120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.886145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.886331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.886505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.886529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.886712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.886869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.886893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.887045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.887218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.887242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 
00:26:46.965 [2024-05-12 07:06:53.887388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.887568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.887593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.887744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.887899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.887924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.888100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.888252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.888277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.888482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.888631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.888656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 
00:26:46.965 [2024-05-12 07:06:53.888818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.888984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.889009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.889184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.889328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.889353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.889514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.889691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.889721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.889890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.890030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.890054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 
00:26:46.965 [2024-05-12 07:06:53.890229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.890376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.890402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.890581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.890752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.890778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.890950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.891120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.891144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.891297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.891470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.891495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 
00:26:46.965 [2024-05-12 07:06:53.891654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.891807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.891832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.891984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.892159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.892187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.892358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.892524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.892549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.892717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.892863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.892888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 
00:26:46.965 [2024-05-12 07:06:53.893057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.893197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.893221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.893426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.893572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.893597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.893795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.893945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.893972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.894173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.894324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.894349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 
00:26:46.965 [2024-05-12 07:06:53.894502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.894650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.894675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.894853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.895004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.895030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.895183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.895492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.895517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 00:26:46.965 [2024-05-12 07:06:53.895679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.895843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.895869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.965 qpair failed and we were unable to recover it. 
00:26:46.965 [2024-05-12 07:06:53.896166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.965 [2024-05-12 07:06:53.896336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.896361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 00:26:46.966 [2024-05-12 07:06:53.896516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.896663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.896688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 00:26:46.966 [2024-05-12 07:06:53.896841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.896985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.897010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 00:26:46.966 [2024-05-12 07:06:53.897189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.897344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.897370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 
00:26:46.966 [2024-05-12 07:06:53.897529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.897676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.897707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 00:26:46.966 [2024-05-12 07:06:53.897893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.898045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.898070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 00:26:46.966 [2024-05-12 07:06:53.898222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.898369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.898396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 00:26:46.966 [2024-05-12 07:06:53.898578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.898763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.898789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 
00:26:46.966 [2024-05-12 07:06:53.898970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.899120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.899144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 00:26:46.966 [2024-05-12 07:06:53.899308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.899482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.899506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 00:26:46.966 [2024-05-12 07:06:53.899684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.899874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.899899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 00:26:46.966 [2024-05-12 07:06:53.900062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.900238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.966 [2024-05-12 07:06:53.900263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.966 qpair failed and we were unable to recover it. 
00:26:46.969 [2024-05-12 07:06:53.931177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.931325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.931350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.931500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.931646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.931671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.931864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.932052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.932077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.932228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.932368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.932394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 
00:26:46.969 [2024-05-12 07:06:53.932571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.932717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.932743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.932891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.933080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.933105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.933253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.933420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.933445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.933599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.933777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.933804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 
00:26:46.969 [2024-05-12 07:06:53.933978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.934160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.934185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.934366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.934522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.934547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.934700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.934853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.934877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.935054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.935198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.935223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 
00:26:46.969 [2024-05-12 07:06:53.935379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.935551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.935576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.935740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.935925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.935950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.936100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.936262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.936287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.936482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.936661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.936685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 
00:26:46.969 [2024-05-12 07:06:53.936840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.937014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.937039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.937246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.937421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.937456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.937598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.937762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.937788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.969 qpair failed and we were unable to recover it. 00:26:46.969 [2024-05-12 07:06:53.937940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.969 [2024-05-12 07:06:53.938131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.938156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 
00:26:46.970 [2024-05-12 07:06:53.938339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.938491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.938517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.938700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.938897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.938922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.939110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.939248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.939273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.939442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.939599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.939624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 
00:26:46.970 [2024-05-12 07:06:53.939777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.939922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.939947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.940126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.940281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.940306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.940479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.940648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.940674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.940876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.941070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.941099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 
00:26:46.970 [2024-05-12 07:06:53.941294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.941452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.941484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.941664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.941819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.941846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.941999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.942155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.942181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.942335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.942516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.942551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 
00:26:46.970 [2024-05-12 07:06:53.942762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.942911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.942937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.943115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.943296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.943321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.943491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.943682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.943715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.943882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.944060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.944086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 
00:26:46.970 [2024-05-12 07:06:53.944237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.944391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.944417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.944575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.944760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.944786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.944965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.945144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.945177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.945356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.945536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.945561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 
00:26:46.970 [2024-05-12 07:06:53.945725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.945898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.945923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.946103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.946246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.946271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.946423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.946610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.946635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.946781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.946962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.946987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 
00:26:46.970 [2024-05-12 07:06:53.947194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.947355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.947382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.947587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.947754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.947780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.947961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.948125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.948150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.970 [2024-05-12 07:06:53.948298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.948447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.948473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 
00:26:46.970 [2024-05-12 07:06:53.948642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.948809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.970 [2024-05-12 07:06:53.948834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.970 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.949002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.949161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.949188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.949339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.949479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.949504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.949651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.949812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.949839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 
00:26:46.971 [2024-05-12 07:06:53.949989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.950142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.950167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.950344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.950544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.950569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.950722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.950864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.950893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.951045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.951185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.951211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 
00:26:46.971 [2024-05-12 07:06:53.951381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.951559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.951585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.951756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.951901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.951930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.952089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.952266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.952291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.952472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.952625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.952652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 
00:26:46.971 [2024-05-12 07:06:53.952822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.952979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.953006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.953186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.953361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.953388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.953530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.953675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.953706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.953864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.954030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.954056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 
00:26:46.971 [2024-05-12 07:06:53.954231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.954410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.954436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.954595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.954747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.954773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.954925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.955104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.955129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.955279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.955452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.955478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 
00:26:46.971 [2024-05-12 07:06:53.955642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.955803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.955828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.955982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.956130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.956155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.956361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.956542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.956567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.956747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.956926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.956951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 
00:26:46.971 [2024-05-12 07:06:53.957118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.957268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.957293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.957444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.957601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.957628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.957775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.957921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.957948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.958124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.958278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.958306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 
00:26:46.971 [2024-05-12 07:06:53.958456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.958610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.958636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.971 [2024-05-12 07:06:53.958825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.958992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.971 [2024-05-12 07:06:53.959017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.971 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.959193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.959343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.959369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.959523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.959670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.959707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 
00:26:46.972 [2024-05-12 07:06:53.959865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.960012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.960039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.960203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.960402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.960427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.960612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.960773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.960799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.960958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.961132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.961158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 
00:26:46.972 [2024-05-12 07:06:53.961330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.961485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.961511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.961684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.961837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.961862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.962043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.962190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.962215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.962384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.962539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.962566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 
00:26:46.972 [2024-05-12 07:06:53.962729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.962879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.962906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.963087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.963238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.963263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.963413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.963559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.963586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.963746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.963927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.963953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 
00:26:46.972 [2024-05-12 07:06:53.964105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.964278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.964304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.964449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.964593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.964619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.964802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.964958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.964983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.965159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.965335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.965360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 
00:26:46.972 [2024-05-12 07:06:53.965531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.965699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.965725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.965898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.966067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.966092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.966265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.966436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.966461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.966614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.966766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.966793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 
00:26:46.972 [2024-05-12 07:06:53.966941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.967082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.967107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.967255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.967403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.967428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.967582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.967749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.967777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.972 qpair failed and we were unable to recover it. 00:26:46.972 [2024-05-12 07:06:53.967947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.972 [2024-05-12 07:06:53.968101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.968128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 
00:26:46.973 [2024-05-12 07:06:53.968282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.968458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.968483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.968659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.968810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.968836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.969017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.969195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.969220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.969385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.969565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.969591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 
00:26:46.973 [2024-05-12 07:06:53.969748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.969934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.969960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.970121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.970272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.970297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.970450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.970645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.970670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.970831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.971012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.971038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 
00:26:46.973 [2024-05-12 07:06:53.971200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.971369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.971394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.971542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.971707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.971732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.971877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.972031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.972056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.972209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.972379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.972404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 
00:26:46.973 [2024-05-12 07:06:53.972549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.972714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.972741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.972895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.973099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.973125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.973269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.973412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.973437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faca4000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.973606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.973779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.973807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 
00:26:46.973 [2024-05-12 07:06:53.973979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.974154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.974180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.974336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.974508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.974535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.974710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.974874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.974899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.975051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.975201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.975227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 
00:26:46.973 [2024-05-12 07:06:53.975406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.975583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.975608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.975781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.975936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.975960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.976113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.976296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.976322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 00:26:46.973 [2024-05-12 07:06:53.976476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.976669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.973 [2024-05-12 07:06:53.976702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.973 qpair failed and we were unable to recover it. 
00:26:46.973 [2024-05-12 07:06:53.976879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.973 [2024-05-12 07:06:53.977051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.973 [2024-05-12 07:06:53.977077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420
00:26:46.973 qpair failed and we were unable to recover it.
[The same three-message sequence — two posix_sock_create connect() failures with errno = 111 (ECONNREFUSED), followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x7facac000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 07:06:53.977 through 07:06:54.008.]
00:26:46.976 [2024-05-12 07:06:54.008204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-05-12 07:06:54.008384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-05-12 07:06:54.008410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-05-12 07:06:54.008601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-05-12 07:06:54.008780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.008807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.008988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.009161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.009185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.009337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.009494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.009521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-05-12 07:06:54.009681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.009836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.009863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.010049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.010205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.010229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.010385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.010556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.010581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.010764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.010940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.010965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-05-12 07:06:54.011146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.011292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.011318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.011469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.011647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.011673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.011834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.011979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.012004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.012154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.012349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.012374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-05-12 07:06:54.012547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.012701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.012727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.012882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.013031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.013056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.013222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.013378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.013404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.013563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.013745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.013771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-05-12 07:06:54.013954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.014097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.014122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.014327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.014491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.014517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.014670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.014855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.014880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.015049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.015196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.015222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-05-12 07:06:54.015420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.015574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.015599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.015756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.015898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.015923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.016066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.016214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.016239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.016447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.016591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.016616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-05-12 07:06:54.016769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.016965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.016990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.017143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.017322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.017347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.017522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.017670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.017702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.017858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.018006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.018032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-05-12 07:06:54.018193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.018371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.018398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.018576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.018723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.018749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-05-12 07:06:54.018957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.019135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-05-12 07:06:54.019160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.019313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.019465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.019490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-05-12 07:06:54.019645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.019809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.019834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.020009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.020186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.020211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.020398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.020570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.020595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.020782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.020935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.020961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-05-12 07:06:54.021134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.021310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.021335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.021519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.021674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.021705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.021888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.022030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.022055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.022226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.022373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.022398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-05-12 07:06:54.022601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.022759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.022785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.022943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.023124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.023149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.023324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.023506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.023531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.023724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.023907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.023933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-05-12 07:06:54.024099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.024281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.024308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.024462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.024607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.024632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.024812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.024966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.024991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.025175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.025341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.025367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-05-12 07:06:54.025548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.025741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.025767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.025938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.026085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.026112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.026262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.026456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.026482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.026634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.026841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.026868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-05-12 07:06:54.027019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.027223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.027249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.027428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.027578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.027603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7facac000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.027799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.027970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.027999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.028154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.028308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.028334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-05-12 07:06:54.028488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.028666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.028691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.028887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.029057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.029081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.029232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.029389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.029414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-05-12 07:06:54.029570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.029726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-05-12 07:06:54.029763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-05-12 07:06:54.029920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-05-12 07:06:54.030097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-05-12 07:06:54.030122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-05-12 07:06:54.030276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-05-12 07:06:54.030469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-05-12 07:06:54.030495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-05-12 07:06:54.030706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-05-12 07:06:54.030852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-05-12 07:06:54.030878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-05-12 07:06:54.031045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-05-12 07:06:54.031196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-05-12 07:06:54.031220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 
00:26:46.979 [2024-05-12 07:06:54.031368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.031557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.031582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.031749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.031897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.031922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.032101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.032275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.032300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.032444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.032596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.032620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.032800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.032943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.032968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.033119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.033290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.033315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.033490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.033636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.033660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.033853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.034038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.034063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.034268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.034409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.034434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.034588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.034798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.034824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.035003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.035155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.035185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.035340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.035489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.035514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.035660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.035817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.035842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.035987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.036152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.036178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.036349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.036513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.036538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.036683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.036864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.036889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.037080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.037229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.037254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.037399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.037547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.037571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.037718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.037899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.037923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.038114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.038278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.038303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.038456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.038598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.038627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.038808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.038955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.038982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.039124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.039277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.039303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.039486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.039636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.039663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.039857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.040014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.040040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [2024-05-12 07:06:54.040196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.040362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-05-12 07:06:54.040388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.040543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.040749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.040775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.040930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.041078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.041103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.041275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.041468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.041493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.041641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.041831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.041856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.042021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.042185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.042209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.042393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.042573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.042597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.042752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.042900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.042925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.043087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.043222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.043248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.043404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.043549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.043574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.043720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.043870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.043895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.044056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.044208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.044233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.044406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.044575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.044600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.044778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.044926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.044951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.045103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.045249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.045274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.045440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.045585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.045610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.045795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.045961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.045986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.046134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.046326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.046351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.046496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.046674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.046706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.046904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.047095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.047119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.047307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.047455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.047481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.047641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.047786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.047812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.047968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.048122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.048147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.048332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.048476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.048501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.048655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.048816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.048841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.048999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.049145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.049169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.049339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.049511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.049537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.049750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.049901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.049926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.050094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.050267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.050292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.050442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.050585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.050610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.980 qpair failed and we were unable to recover it.
00:26:46.980 [2024-05-12 07:06:54.050762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.050907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.980 [2024-05-12 07:06:54.050932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.051094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.051249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.051274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.051440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.051592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.051617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.051776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.051916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.051941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.052131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.052282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.052309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.052507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.052650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.052675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.052848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.053002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.053027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.053227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.053375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.053401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.053548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.053701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.053729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.053869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.054025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.054052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.054199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.054342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.054368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.054528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.054720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.054745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.054896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.055062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.055087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.055248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.055392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.055417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.055567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.055740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.055766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.055915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.056053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.056078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.056294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.056446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.981 [2024-05-12 07:06:54.056475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:46.981 qpair failed and we were unable to recover it.
00:26:46.981 [2024-05-12 07:06:54.056624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.056775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.056801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.981 qpair failed and we were unable to recover it. 00:26:46.981 [2024-05-12 07:06:54.056952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.057100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.057125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.981 qpair failed and we were unable to recover it. 00:26:46.981 [2024-05-12 07:06:54.057332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.057525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.057550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.981 qpair failed and we were unable to recover it. 00:26:46.981 [2024-05-12 07:06:54.057699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.057859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.057884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.981 qpair failed and we were unable to recover it. 
00:26:46.981 [2024-05-12 07:06:54.058053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.058235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.058260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.981 qpair failed and we were unable to recover it. 00:26:46.981 [2024-05-12 07:06:54.058413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.058587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.058612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.981 qpair failed and we were unable to recover it. 00:26:46.981 [2024-05-12 07:06:54.058812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.059003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.059027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.981 qpair failed and we were unable to recover it. 00:26:46.981 [2024-05-12 07:06:54.059172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.059347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.059372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.981 qpair failed and we were unable to recover it. 
00:26:46.981 [2024-05-12 07:06:54.059525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.059681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.981 [2024-05-12 07:06:54.059712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.981 qpair failed and we were unable to recover it. 00:26:46.981 [2024-05-12 07:06:54.059873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.060051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.060076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 00:26:46.982 [2024-05-12 07:06:54.060234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.060383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.060408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 00:26:46.982 [2024-05-12 07:06:54.060589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.060769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.060795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 
00:26:46.982 [2024-05-12 07:06:54.060941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.061121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.061146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 00:26:46.982 [2024-05-12 07:06:54.061321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.061462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.061487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 00:26:46.982 [2024-05-12 07:06:54.061642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.061825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.061851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 00:26:46.982 [2024-05-12 07:06:54.062030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.062210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.062235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 
00:26:46.982 [2024-05-12 07:06:54.062378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.062544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.062568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 00:26:46.982 [2024-05-12 07:06:54.062757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.062917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.062942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 00:26:46.982 [2024-05-12 07:06:54.063089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.063238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.063262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 00:26:46.982 [2024-05-12 07:06:54.063432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.063584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.063608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 
00:26:46.982 [2024-05-12 07:06:54.063782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.063941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.982 [2024-05-12 07:06:54.063966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:46.982 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.064111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.064282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.064308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.064487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.064667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.064691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.064873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.065020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.065045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 
00:26:47.258 [2024-05-12 07:06:54.065248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.065389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.065415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.065585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.065776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.065802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.065962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.066131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.066156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.066317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.066463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.066490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 
00:26:47.258 [2024-05-12 07:06:54.066668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.066831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.066858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.067043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.067191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.067216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.067367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.067513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.067538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.067726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.067903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.067928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 
00:26:47.258 [2024-05-12 07:06:54.068080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.068223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.068248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.068397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.068545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.068570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.068725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.068885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.068910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.069086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.069238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.069262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 
00:26:47.258 [2024-05-12 07:06:54.069432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.069583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.069608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.069792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.069931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.069956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.070140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.070288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.070313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.070479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.070642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.070667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 
00:26:47.258 [2024-05-12 07:06:54.070826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.070978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.071003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.071171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.071320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.071345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-05-12 07:06:54.071494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.071650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-05-12 07:06:54.071675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.071861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.072052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.072077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-05-12 07:06:54.072222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.072364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.072389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.072559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.072737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.072769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.072922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.073066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.073091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.073240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.073382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.073407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-05-12 07:06:54.073588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.073763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.073788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.073942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.074087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.074112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.074266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.074473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.074502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.074661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.074839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.074864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-05-12 07:06:54.075012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.075154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.075179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.075350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.075500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.075525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.075703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.075860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.075887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.076035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.076189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.076215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-05-12 07:06:54.076402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.076552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.076577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.076723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.076887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.076912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.077073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.077252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.077277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.077443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.077613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.077638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-05-12 07:06:54.077808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.077982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.078007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.078211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.078352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.078377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.078552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.078704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.078730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.078926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.079068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.079093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-05-12 07:06:54.079285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.079460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.079485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.079633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.079809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.079835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.079987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.080160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.080185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.080339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.080488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.080513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-05-12 07:06:54.080661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.080846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.080872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.081019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.081171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.081198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.081346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.081494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.081519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-05-12 07:06:54.081672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.081829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.081854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-05-12 07:06:54.082008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.259 [2024-05-12 07:06:54.082160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.082185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.082344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.082523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.082548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.082706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.082857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.082882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.083070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.083209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.083233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 
00:26:47.260 [2024-05-12 07:06:54.083381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.083523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.083547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.083690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.083871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.083895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.084079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.084218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.084242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.084415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.084562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.084587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 
00:26:47.260 [2024-05-12 07:06:54.084744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.084893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.084918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.085065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.085231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.085256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.085403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.085569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.085594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.085776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.085919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.085944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 
00:26:47.260 [2024-05-12 07:06:54.086112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.086274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.086299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.086488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.086631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.086655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.086815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.086993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.087018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.087221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.087375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.087401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 
00:26:47.260 [2024-05-12 07:06:54.087595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.087784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.087810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.087955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.088133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.088159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.088343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.088509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.088533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.088716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.088865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.088892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 
00:26:47.260 [2024-05-12 07:06:54.089045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.089192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.089217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.089390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.089543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.089569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.089760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.089931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.089956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.090135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.090280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.090304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 
00:26:47.260 [2024-05-12 07:06:54.090457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.090627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.090652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.090804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.090975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.091000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.091176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.091322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.091346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.091509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.091660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.091685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 
00:26:47.260 [2024-05-12 07:06:54.091865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.092016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.092040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.260 [2024-05-12 07:06:54.092190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.092361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.260 [2024-05-12 07:06:54.092389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.260 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.092534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.092710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.092736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.092886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.093044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.093069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 
00:26:47.261 [2024-05-12 07:06:54.093240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.093398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.093423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.093572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.093719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.093744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.093884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.094032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.094058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.094237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.094408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.094433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 
00:26:47.261 [2024-05-12 07:06:54.094601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.094769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.094795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.094942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.095113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.095138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.095285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.095459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.095483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.095669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.095831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.095856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 
00:26:47.261 [2024-05-12 07:06:54.096036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.096181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.096205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.096349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.096525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.096550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.096690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.096843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.096867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.097013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.097189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.097214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 
00:26:47.261 [2024-05-12 07:06:54.097360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.097512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.097536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.097684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.097855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.097880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.098059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.098225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.098249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.098408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.098561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.098587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 
00:26:47.261 [2024-05-12 07:06:54.098744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.098917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.098942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.099097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.099241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.099266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.099445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.099590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.099615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.099781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.099931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.099956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 
00:26:47.261 [2024-05-12 07:06:54.100125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.100301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.100326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.100493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.100670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.100702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.100873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.101023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.101048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.101206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.101351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.101375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 
00:26:47.261 [2024-05-12 07:06:54.101523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.101663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.101687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.101868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.102024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.102049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.102225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.102397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.261 [2024-05-12 07:06:54.102422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.261 qpair failed and we were unable to recover it. 00:26:47.261 [2024-05-12 07:06:54.102602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.102776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.102802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 
00:26:47.262 [2024-05-12 07:06:54.102954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.103133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.103157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 00:26:47.262 [2024-05-12 07:06:54.103299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.103437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.103462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 00:26:47.262 [2024-05-12 07:06:54.103615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.103787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.103813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 00:26:47.262 [2024-05-12 07:06:54.103982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.104145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.104170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 
00:26:47.262 [2024-05-12 07:06:54.104312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.104486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.104510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 00:26:47.262 [2024-05-12 07:06:54.104752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.104924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.104948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 00:26:47.262 [2024-05-12 07:06:54.105129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.105304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.105328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 00:26:47.262 [2024-05-12 07:06:54.105476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.105654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.105679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 
00:26:47.262 [2024-05-12 07:06:54.105843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.106020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.106044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 00:26:47.262 [2024-05-12 07:06:54.106217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.106362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.106387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 00:26:47.262 [2024-05-12 07:06:54.106533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.106684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.106715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 00:26:47.262 [2024-05-12 07:06:54.106880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.107025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.262 [2024-05-12 07:06:54.107049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.262 qpair failed and we were unable to recover it. 
00:26:47.262 [2024-05-12 07:06:54.107214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.107364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.107388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.107556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.107717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.107752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.107910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.108064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.108088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.108262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.108423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.108447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.108620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.108789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.108815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.108978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.109153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.109178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.109328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.109498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.109522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.109675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.109831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.109855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.110000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.110176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.110204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.110354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.110529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.110553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-05-12 07:06:54.110710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.262 [2024-05-12 07:06:54.110855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.110880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.111032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.111209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.111234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.111381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.111552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.111577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.111756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.111926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.111951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.112102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.112254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.112278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.112444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.112632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.112656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.112809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.112954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.112979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.113127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.113268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.113293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.113464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.113642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.113671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.113834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.113981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.114005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.114150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.114331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.114355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.114509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.114681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.114711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.114861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.115020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.115045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.115218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.115367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.115393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.115566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.115724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.115760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.115937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.116101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.116128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.116321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.116510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.116535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.116686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.116903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.116928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.117073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.117241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.117266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.117446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.117616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.117640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.117811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.117985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.118010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.118161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.118332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.118356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.118492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.118700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.118726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.118877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.119021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.119045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.119239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.119416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.119441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.119595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.119760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.119786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.119959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.120102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.120128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.120282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.120420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.120445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.120599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.120771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.120796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.263 qpair failed and we were unable to recover it.
00:26:47.263 [2024-05-12 07:06:54.120959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.263 [2024-05-12 07:06:54.121154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.121179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.121332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.121519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.121544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.121720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.121868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.121893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.122066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.122218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.122243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.122418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.122595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.122620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.122762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.122912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.122937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.123103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.123277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.123302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.123476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.123627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.123652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.123805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.124007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.124032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.124206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.124378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.124402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.124620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.124812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.124838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.125009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.125157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.125183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.125362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.125507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.125532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.125701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.125847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.125872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.126036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.126216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.126241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.126411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.126576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.126601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.126762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.126927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.126952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.127152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.127303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.127328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.127481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.127626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.127651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.127812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.127957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.127982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.128129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.128296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.128321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.128497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.128652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.128677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.128836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.128988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.129012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.129169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.129314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.129339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.129517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.129661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.129686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.129870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.130022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.130047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.130223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.130391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.130416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.130568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.130732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.130758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.130925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.131082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.131107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.131258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.131425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.264 [2024-05-12 07:06:54.131450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.264 qpair failed and we were unable to recover it.
00:26:47.264 [2024-05-12 07:06:54.131588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.265 [2024-05-12 07:06:54.131770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.265 [2024-05-12 07:06:54.131799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.265 qpair failed and we were unable to recover it.
00:26:47.265 [2024-05-12 07:06:54.131959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.265 [2024-05-12 07:06:54.132104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.265 [2024-05-12 07:06:54.132129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.265 qpair failed and we were unable to recover it.
00:26:47.265 [2024-05-12 07:06:54.132293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.265 [2024-05-12 07:06:54.132458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.265 [2024-05-12 07:06:54.132483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.265 qpair failed and we were unable to recover it.
00:26:47.265 [2024-05-12 07:06:54.132663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.132849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.132875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.133022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.133172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.133197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.133346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.133520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.133545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.133700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.133850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.133874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 
00:26:47.265 [2024-05-12 07:06:54.134022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.134171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.134196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.134369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.134544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.134569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.134734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.134882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.134907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.135055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.135229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.135254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 
00:26:47.265 [2024-05-12 07:06:54.135420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.135567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.135591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.135763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.135918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.135943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.136090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.136235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.136259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.136412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.136580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.136605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 
00:26:47.265 [2024-05-12 07:06:54.136766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.136959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.136984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.137125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.137297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.137322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.137501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.137655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.137680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.137837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.137982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.138007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 
00:26:47.265 [2024-05-12 07:06:54.138150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.138326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.138353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.138509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.138681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.138712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.138887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.139046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.139071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.139218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.139380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.139405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 
00:26:47.265 [2024-05-12 07:06:54.139559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.139708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.139733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.139890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.140034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.140059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.140212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.140355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.140380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.140528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.140673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.140702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 
00:26:47.265 [2024-05-12 07:06:54.140863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.141018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.141042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.141226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.141368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.141392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.265 [2024-05-12 07:06:54.141562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.141719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.265 [2024-05-12 07:06:54.141745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.265 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.141925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.142129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.142154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 
00:26:47.266 [2024-05-12 07:06:54.142315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.142466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.142490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.142644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.142812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.142837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.142989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.143135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.143160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.143336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.143475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.143500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 
00:26:47.266 [2024-05-12 07:06:54.143676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.143832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.143857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.144005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.144165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.144190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.144349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.144512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.144537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.144688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.144833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.144858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 
00:26:47.266 [2024-05-12 07:06:54.145021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.145168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.145193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.145366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.145517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.145542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.145701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.145885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.145910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.146087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.146257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.146282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 
00:26:47.266 [2024-05-12 07:06:54.146430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.146579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.146604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.146751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.146914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.146939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.147112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.147265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.147291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.147437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.147601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.147627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 
00:26:47.266 [2024-05-12 07:06:54.147810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.147959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.147983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.148173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.148325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.148349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.148502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.148641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.148666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.148833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.148989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.149014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 
00:26:47.266 [2024-05-12 07:06:54.149237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.149396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.149425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.149582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.149783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.149809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.149970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.150122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.150146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.150314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.150463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.150488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 
00:26:47.266 [2024-05-12 07:06:54.150666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.150831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.150857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.151025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.151169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.151193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.151337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.151487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.151512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 00:26:47.266 [2024-05-12 07:06:54.151683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.151868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.151894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.266 qpair failed and we were unable to recover it. 
00:26:47.266 [2024-05-12 07:06:54.152041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.266 [2024-05-12 07:06:54.152201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.152226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.152371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.152569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.152594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.152746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.152889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.152914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.153125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.153275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.153300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 
00:26:47.267 [2024-05-12 07:06:54.153447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.153617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.153642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.153816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.153991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.154015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.154162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.154299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.154324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.154505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.154684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.154720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 
00:26:47.267 [2024-05-12 07:06:54.154896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.155065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.155090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.155242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.155383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.155408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.155556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.155704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.155730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.155899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.156050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.156074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 
00:26:47.267 [2024-05-12 07:06:54.156225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.156401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.156426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.156606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.156785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.156811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.156968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.157122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.157147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 00:26:47.267 [2024-05-12 07:06:54.157293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.157436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.267 [2024-05-12 07:06:54.157461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.267 qpair failed and we were unable to recover it. 
00:26:47.267 [2024-05-12 07:06:54.157613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.157794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.157820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.267 qpair failed and we were unable to recover it.
00:26:47.267 [2024-05-12 07:06:54.157970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.158160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.158185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.267 qpair failed and we were unable to recover it.
00:26:47.267 [2024-05-12 07:06:54.158334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.158477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.158501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.267 qpair failed and we were unable to recover it.
00:26:47.267 [2024-05-12 07:06:54.158647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.158800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.158825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.267 qpair failed and we were unable to recover it.
00:26:47.267 [2024-05-12 07:06:54.158978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.159152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.159176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.267 qpair failed and we were unable to recover it.
00:26:47.267 [2024-05-12 07:06:54.159350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.159494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.159519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.267 qpair failed and we were unable to recover it.
00:26:47.267 [2024-05-12 07:06:54.159692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.159845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.159870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.267 qpair failed and we were unable to recover it.
00:26:47.267 [2024-05-12 07:06:54.160061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.160210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.160235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.267 qpair failed and we were unable to recover it.
00:26:47.267 [2024-05-12 07:06:54.160414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.160578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.267 [2024-05-12 07:06:54.160603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.267 qpair failed and we were unable to recover it.
00:26:47.267 [2024-05-12 07:06:54.160753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.160905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.160930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.161074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.161217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.161241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.161419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.161556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.161580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.161731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.161883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.161908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.162061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.162206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.162231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.162411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.162566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.162590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.162765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.162936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.162961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.163111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.163254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.163279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.163449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.163631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.163656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.163808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.163962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.163987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.164141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.164292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.164318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.164486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.164651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.164676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.164845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.164993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.165017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.165171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.165346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.165370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.165525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.165668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.165693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.165852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.165990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.166015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.166161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.166300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.166325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.166474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.166649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.166674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.166844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.167014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.167042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.167217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.167360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.167384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.167553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.167705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.167730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.167884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.168030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.168056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.168257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.168406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.168431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.168617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.168771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.168797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.168945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.169120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.169144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.169309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.169455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.169479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.169646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.169804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.169829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.169983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.170128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.170153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.170296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.170478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.170509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.268 [2024-05-12 07:06:54.170661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.170819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.268 [2024-05-12 07:06:54.170844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.268 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.170995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.171158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.171182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.171349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.171502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.171527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.171678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.171849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.171874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.172048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.172205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.172230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.172386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.172531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.172556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.172709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.172861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.172887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.173052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.173205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.173231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.173408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.173555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.173579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.173764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.173937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.173962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.174118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.174267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.174292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.174444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.174600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.174626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.174780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.174952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.174977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.175159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.175302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.175327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.175520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.175664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.175688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.175854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.176001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.176025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.176162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.176313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.176338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.176535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.176689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.176720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.176866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.177012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.177037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.177206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.177378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.177403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.177609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.177753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.177778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.177955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.178121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.178146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.178311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.178469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.178494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.178671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.178825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.178850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.179026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.179196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.179220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.179396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.179567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.179592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.179788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.179936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.179961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.180125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.180276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.180301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.180484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.180623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.180648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.180814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.180991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.181016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.269 qpair failed and we were unable to recover it.
00:26:47.269 [2024-05-12 07:06:54.181185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.269 [2024-05-12 07:06:54.181368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.270 [2024-05-12 07:06:54.181394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.270 qpair failed and we were unable to recover it.
00:26:47.270 [2024-05-12 07:06:54.181545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.270 [2024-05-12 07:06:54.181710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.270 [2024-05-12 07:06:54.181736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.270 qpair failed and we were unable to recover it.
00:26:47.270 [2024-05-12 07:06:54.181912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.270 [2024-05-12 07:06:54.182092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.270 [2024-05-12 07:06:54.182117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.270 qpair failed and we were unable to recover it.
00:26:47.270 [2024-05-12 07:06:54.182287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.270 [2024-05-12 07:06:54.182440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.270 [2024-05-12 07:06:54.182464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.270 qpair failed and we were unable to recover it.
00:26:47.270 [2024-05-12 07:06:54.182640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.182800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.182826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.183010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.183156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.183181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.183335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.183554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.183579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.183758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.183930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.183955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 
00:26:47.270 [2024-05-12 07:06:54.184104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.184241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.184266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.184411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.184568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.184593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.184745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.184912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.184937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.185113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.185314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.185339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 
00:26:47.270 [2024-05-12 07:06:54.185516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.185720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.185746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.185896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.186074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.186099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.186276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.186420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.186445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.186620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.186801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.186826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 
00:26:47.270 [2024-05-12 07:06:54.186974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.187154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.187179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.187355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.187529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.187555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.187718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.187876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.187902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.188048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.188190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.188216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 
00:26:47.270 [2024-05-12 07:06:54.188371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.188519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.188549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.188731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.188907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.188932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.189112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.189264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.189289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.189494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.189665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.189689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 
00:26:47.270 [2024-05-12 07:06:54.189854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.190001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.190026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.190178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.190354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.190379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.190574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.190738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.190763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.190915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.191059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.191084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 
00:26:47.270 [2024-05-12 07:06:54.191228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.191398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.191422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.191591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.191767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.191792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.270 qpair failed and we were unable to recover it. 00:26:47.270 [2024-05-12 07:06:54.191952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.270 [2024-05-12 07:06:54.192129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.192154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.192301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.192443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.192468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 
00:26:47.271 [2024-05-12 07:06:54.192652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.192836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.192861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.193009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.193180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.193205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.193380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.193518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.193543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.193704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.193851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.193875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 
00:26:47.271 [2024-05-12 07:06:54.194044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.194220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.194244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.194393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.194539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.194565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.194746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.194924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.194950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.195091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.195241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.195266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 
00:26:47.271 [2024-05-12 07:06:54.195439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.195587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.195612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.195768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.195920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.195945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.196122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.196269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.196295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.196474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.196621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.196646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 
00:26:47.271 [2024-05-12 07:06:54.196835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.197011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.197035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.197185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.197355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.197380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.197562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.197713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.197738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.197949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.198100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.198125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 
00:26:47.271 [2024-05-12 07:06:54.198317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.198480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.198504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.198685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.198861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.198886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.199055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.199218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.199243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.199422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.199596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.199621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 
00:26:47.271 [2024-05-12 07:06:54.199773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.199922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.199947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.200129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.200326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.200350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.271 [2024-05-12 07:06:54.200509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.200658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.271 [2024-05-12 07:06:54.200682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.271 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.200862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.201062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.201087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 
00:26:47.272 [2024-05-12 07:06:54.201236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.201410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.201435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.201586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.201760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.201786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.201967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.202138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.202162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.202340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.202488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.202513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 
00:26:47.272 [2024-05-12 07:06:54.202664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.202847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.202872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.203042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.203222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.203247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.203413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.203558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.203583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.203751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.203900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.203925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 
00:26:47.272 [2024-05-12 07:06:54.204067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.204217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.204241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.204400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.204558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.204582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.204785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.204936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.204960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.205134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.205285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.205309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 
00:26:47.272 [2024-05-12 07:06:54.205460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.205607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.205632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.205804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.205978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.206003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.206181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.206324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.206350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 00:26:47.272 [2024-05-12 07:06:54.206524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.206673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.272 [2024-05-12 07:06:54.206710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.272 qpair failed and we were unable to recover it. 
00:26:47.275 [2024-05-12 07:06:54.236548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.236690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.236720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.236893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.237064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.237089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.237263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.237415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.237441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.237611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.237787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.237813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 
00:26:47.275 [2024-05-12 07:06:54.238008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.238180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.238205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.238388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.238555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.238580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.238723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.238896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.238921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.239089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.239267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.239291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 
00:26:47.275 [2024-05-12 07:06:54.239466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.239640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.239665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.239831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.239986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.240011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.240157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.240335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.240359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.240499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.240717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.240742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 
00:26:47.275 [2024-05-12 07:06:54.240895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.241042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.241067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.241242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.241425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.241449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.241594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.241745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.241770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.241915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.242085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.242109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 
00:26:47.275 [2024-05-12 07:06:54.242264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.242416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.275 [2024-05-12 07:06:54.242442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.275 qpair failed and we were unable to recover it. 00:26:47.275 [2024-05-12 07:06:54.242591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.242738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.242768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.242947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.243125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.243149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.243305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.243472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.243497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 
00:26:47.276 [2024-05-12 07:06:54.243641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.243793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.243820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.243978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.244154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.244179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.244330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.244482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.244506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.244680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.244840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.244866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 
00:26:47.276 [2024-05-12 07:06:54.245066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.245211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.245236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.245392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.245556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.245581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.245744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.245920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.245945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.246092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.246274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.246303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 
00:26:47.276 [2024-05-12 07:06:54.246482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.246659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.246683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.246862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.247011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.247038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.247218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.247355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.247380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.247556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.247714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.247740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 
00:26:47.276 [2024-05-12 07:06:54.247886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.248055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.248080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.248260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.248398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.248423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.248581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.248732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.248758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.248933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.249075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.249099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 
00:26:47.276 [2024-05-12 07:06:54.249247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.249393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.249417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.249591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.249756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.249782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.249956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.250110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.250135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.250308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.250450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.250474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 
00:26:47.276 [2024-05-12 07:06:54.250614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.250792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.250817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.250970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.251149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.251174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.276 qpair failed and we were unable to recover it. 00:26:47.276 [2024-05-12 07:06:54.251344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.276 [2024-05-12 07:06:54.251507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.251531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.251684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.251845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.251870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 
00:26:47.277 [2024-05-12 07:06:54.252041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.252181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.252205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.252358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.252504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.252529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.252684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.252857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.252881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.253064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.253219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.253244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 
00:26:47.277 [2024-05-12 07:06:54.253399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.253569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.253594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.253774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.253927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.253952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.254109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.254267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.254292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.254442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.254587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.254612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 
00:26:47.277 [2024-05-12 07:06:54.254788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.254929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.254954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.255120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.255294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.255319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.255482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.255672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.255712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.255867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.256068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.256092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 
00:26:47.277 [2024-05-12 07:06:54.256264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.256429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.256453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.256631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.256779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.256804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.256966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.257125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.257149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.257294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.257447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.257472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 
00:26:47.277 [2024-05-12 07:06:54.257614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.257785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.257810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.257981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.258146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.258171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.258351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.258516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.258540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.258704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.258882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.258908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 
00:26:47.277 [2024-05-12 07:06:54.259087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.259250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.259274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.259422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.259605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.259632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.259781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.259931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.259958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.260152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.260298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.260324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 
00:26:47.277 [2024-05-12 07:06:54.260477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.260630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.260654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.260810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.260986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.261010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.261185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.261333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.261357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 00:26:47.277 [2024-05-12 07:06:54.261539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.261718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.277 [2024-05-12 07:06:54.261744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.277 qpair failed and we were unable to recover it. 
00:26:47.277 [2024-05-12 07:06:54.261895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.262038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.262062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.262227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.262406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.262430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.262582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.262733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.262758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.262937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.263081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.263106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 
00:26:47.278 [2024-05-12 07:06:54.263248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.263390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.263414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.263552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.263734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.263759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.263920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.264093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.264122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.264275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.264445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.264469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 
00:26:47.278 [2024-05-12 07:06:54.264640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.264781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.264806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.264961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.265112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.265137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.265308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.265459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.265483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.265629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.265812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.265837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 
00:26:47.278 [2024-05-12 07:06:54.266010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.266154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.266179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.266326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.266523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.266548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.266688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.266839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.266864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.267009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.267179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.267204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 
00:26:47.278 [2024-05-12 07:06:54.267372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.267559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.267584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.267763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.267911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.267935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.268111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.268288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.268312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.268454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.268596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.268620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 
00:26:47.278 [2024-05-12 07:06:54.268772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.268919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.268944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.269097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.269273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.269297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.269443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.269592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.269617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.269793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.269970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.269996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 
00:26:47.278 [2024-05-12 07:06:54.270177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.270326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.270351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.270526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.270681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.270711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.270867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.271064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.271089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.271250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.271390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.271416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 
00:26:47.278 [2024-05-12 07:06:54.271621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.271786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.271811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.278 [2024-05-12 07:06:54.271961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.272144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.278 [2024-05-12 07:06:54.272169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.278 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.272347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.272493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.272518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.272686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.272842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.272867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 
00:26:47.279 [2024-05-12 07:06:54.273048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.273214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.273239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.273414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.273576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.273600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.273764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.273912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.273938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.274139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.274313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.274338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 
00:26:47.279 [2024-05-12 07:06:54.274510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.274686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.274717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.274889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.275067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.275092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.275282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.275450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.275475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.275622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.275783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.275808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 
00:26:47.279 [2024-05-12 07:06:54.275984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.276134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.276159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.276303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.276444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.276469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.276647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.276799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.276826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.277004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.277147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.277171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 
00:26:47.279 [2024-05-12 07:06:54.277323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.277496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.277521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.277686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.277853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.277878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.278051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.278193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.278219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.278413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.278564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.278590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 
00:26:47.279 [2024-05-12 07:06:54.278743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.278917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.278942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.279091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.279236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.279260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.279408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.279592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.279616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.279784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.279944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.279969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 
00:26:47.279 [2024-05-12 07:06:54.280105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.280258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.280283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.280456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.280605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.280631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.280794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.280948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.280973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.281136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.281319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.281344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 
00:26:47.279 [2024-05-12 07:06:54.281490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.281656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.281681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.281841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.281992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.282020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.282174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.282353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.282378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.279 qpair failed and we were unable to recover it. 00:26:47.279 [2024-05-12 07:06:54.282553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.282732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.279 [2024-05-12 07:06:54.282758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.280 qpair failed and we were unable to recover it. 
00:26:47.280 [2024-05-12 07:06:54.282935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.280 [2024-05-12 07:06:54.283112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.280 [2024-05-12 07:06:54.283136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.280 qpair failed and we were unable to recover it. 00:26:47.280 [2024-05-12 07:06:54.283280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.280 [2024-05-12 07:06:54.283424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.280 [2024-05-12 07:06:54.283448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.280 qpair failed and we were unable to recover it. 00:26:47.280 [2024-05-12 07:06:54.283609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.280 [2024-05-12 07:06:54.283771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.280 [2024-05-12 07:06:54.283796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.280 qpair failed and we were unable to recover it. 00:26:47.280 [2024-05-12 07:06:54.283967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.280 [2024-05-12 07:06:54.284138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.280 [2024-05-12 07:06:54.284162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.280 qpair failed and we were unable to recover it. 
00:26:47.280 [2024-05-12 07:06:54.284365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.284541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.284566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.284732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.284901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.284926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.285096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.285242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.285266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.285433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.285607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.285632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.285839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.285985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.286010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.286155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.286332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.286357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.286518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.286662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.286689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.286857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.287045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.287071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.287225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.287377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.287402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.287547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.287723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.287749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.287886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.288081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.288106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.288251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.288428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.288453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.288602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.288786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.288812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.288983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.289153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.289177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.289331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.289481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.289507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.289650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.289796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.289822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.289972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.290161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.290186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.290364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.290542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.290567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.290721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.290866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.290890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.291046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.291192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.291217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.280 [2024-05-12 07:06:54.291390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.291560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.280 [2024-05-12 07:06:54.291584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.280 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.291738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.291890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.291915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.292070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.292243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.292268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.292414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.292588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.292612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.292770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.292925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.292951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.293155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.293331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.293358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.293531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.293683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.293714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.293893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.294063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.294089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.294252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.294446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.294472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.294648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.294800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.294827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.295006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.295214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.295239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.295396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.295537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.295562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.295727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.295889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.295914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.296085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.296231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.296255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.296407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.296591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.296616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.296803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.296976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.297001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.297144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.297322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.297347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.297552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.297715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.297741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.297921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.298064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.298090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.298268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.298421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.298446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.298594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.298741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.298768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.298942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.299117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.299141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.299312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.299501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.299525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.299679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.299843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.299869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.300013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.300163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.300191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.300364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.300504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.300529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.300681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.300865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.300890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.301077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.301251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.301276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.301416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.301566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.301591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.301763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.301951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.301975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.281 qpair failed and we were unable to recover it.
00:26:47.281 [2024-05-12 07:06:54.302120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.281 [2024-05-12 07:06:54.302282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.302306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.302484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.302631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.302656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.302808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.302956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.302981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.303121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.303265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.303289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.303480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.303650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.303679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.303854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.304059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.304084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.304277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.304419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.304445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.304637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.304793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.304819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.304990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.305164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.305188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.305369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.305518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.305543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.305683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.305851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.305877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.306035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.306179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.306203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.306364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.306537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.306562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.306742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.306883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.306908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.307126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.307295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.307320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.307516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.307685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.307723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.307888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.308056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.308081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.308226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.308373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.308397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.308546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.308702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.308728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.308900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.309061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.309086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.309259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.309401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.309426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.309576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.309732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.282 [2024-05-12 07:06:54.309759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.282 qpair failed and we were unable to recover it.
00:26:47.282 [2024-05-12 07:06:54.309910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.310050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.310074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.282 qpair failed and we were unable to recover it. 00:26:47.282 [2024-05-12 07:06:54.310230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.310394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.310419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.282 qpair failed and we were unable to recover it. 00:26:47.282 [2024-05-12 07:06:54.310568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.310747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.310773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.282 qpair failed and we were unable to recover it. 00:26:47.282 [2024-05-12 07:06:54.310926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.311131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.311156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.282 qpair failed and we were unable to recover it. 
00:26:47.282 [2024-05-12 07:06:54.311326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.311495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.311520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.282 qpair failed and we were unable to recover it. 00:26:47.282 [2024-05-12 07:06:54.311691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.311876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.311901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.282 qpair failed and we were unable to recover it. 00:26:47.282 [2024-05-12 07:06:54.312047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.312224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.312249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.282 qpair failed and we were unable to recover it. 00:26:47.282 [2024-05-12 07:06:54.312424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.312602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.312626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.282 qpair failed and we were unable to recover it. 
00:26:47.282 [2024-05-12 07:06:54.312770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.282 [2024-05-12 07:06:54.312919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.312944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.313139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.313284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.313309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.313457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.313603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.313628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.313800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.313953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.313978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 
00:26:47.283 [2024-05-12 07:06:54.314134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.314302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.314326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.314465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.314636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.314661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.314834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.315012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.315037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.315206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.315370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.315395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 
00:26:47.283 [2024-05-12 07:06:54.315570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.315722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.315748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.315893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.316051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.316077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.316222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.316391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.316416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.316587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.316758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.316783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 
00:26:47.283 [2024-05-12 07:06:54.316937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.317086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.317111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.317252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.317394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.317419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.317614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.317790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.317815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.317962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.318142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.318167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 
00:26:47.283 [2024-05-12 07:06:54.318346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.318488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.318513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.318658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.318824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.318849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.318998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.319160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.319185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.319365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.319539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.319563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 
00:26:47.283 [2024-05-12 07:06:54.319740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.319896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.319923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.320091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.320302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.320327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.320473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.320652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.320676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.320852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.321035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.321059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 
00:26:47.283 [2024-05-12 07:06:54.321234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.321381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.321406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.321550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.321740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.321769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.321916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.322072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.322097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.322268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.322464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.322490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 
00:26:47.283 [2024-05-12 07:06:54.322638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.322788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.322814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.322991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.323138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.283 [2024-05-12 07:06:54.323162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.283 qpair failed and we were unable to recover it. 00:26:47.283 [2024-05-12 07:06:54.323338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.323483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.323508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.323685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.323875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.323900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 
00:26:47.284 [2024-05-12 07:06:54.324049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.324221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.324245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.324418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.324603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.324628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.324784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.324934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.324960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.325112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.325316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.325341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 
00:26:47.284 [2024-05-12 07:06:54.325488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.325641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.325667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.325842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.326045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.326069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.326241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.326420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.326445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.326590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.326748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.326773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 
00:26:47.284 [2024-05-12 07:06:54.326914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.327058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.327082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.327260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.327436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.327461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.327607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.327780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.327805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.327984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.328125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.328149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 
00:26:47.284 [2024-05-12 07:06:54.328292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.328443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.328468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.328647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.328811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.328837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.329010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.329185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.329210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.329352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.329516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.329541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 
00:26:47.284 [2024-05-12 07:06:54.329723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.329866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.329891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.330044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.330221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.330246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.330416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.330607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.330631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.330803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.330976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.331001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 
00:26:47.284 [2024-05-12 07:06:54.331175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.331312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.331337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.331487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.331662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.331686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.331877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.332027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.332052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 00:26:47.284 [2024-05-12 07:06:54.332207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.332386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.284 [2024-05-12 07:06:54.332411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.284 qpair failed and we were unable to recover it. 
00:26:47.284 [2024-05-12 07:06:54.332584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.284 [2024-05-12 07:06:54.332733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.284 [2024-05-12 07:06:54.332759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.284 qpair failed and we were unable to recover it.
00:26:47.284 [2024-05-12 07:06:54.332934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.284 [2024-05-12 07:06:54.333097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.284 [2024-05-12 07:06:54.333122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.284 qpair failed and we were unable to recover it.
00:26:47.284 [2024-05-12 07:06:54.333294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.284 [2024-05-12 07:06:54.333446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.284 [2024-05-12 07:06:54.333470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.284 qpair failed and we were unable to recover it.
00:26:47.284 [2024-05-12 07:06:54.333623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.284 [2024-05-12 07:06:54.333770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.284 [2024-05-12 07:06:54.333795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.284 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.333972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.334142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.334167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.334318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.334483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.334509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.334686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.334844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.334869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.335026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.335177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.335203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.335353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.335527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.335552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.335724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.335891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.335916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.336066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.336247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.336272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.336441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.336619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.336644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.336817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.336991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.337017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.337191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.337367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.337392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.337570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.337716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.337741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.337883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.338054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.338078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.338253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.338403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.338428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.338589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.338741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.338767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.338941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.339084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.339109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.339258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.339415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.339440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.339586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.339797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.339826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.339989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.340186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.340212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.340389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.340535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.340561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.340731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.340876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.340902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.341055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.341254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.341279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.341423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.341591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.341616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.341790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.341937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.341962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.342136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.342289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.342314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.285 [2024-05-12 07:06:54.342486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.342626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.285 [2024-05-12 07:06:54.342650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.285 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.342798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.342946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.342971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.343151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.343301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.343327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.343503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.343692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.343722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.343879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.344030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.344055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.344205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.344379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.344403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.344557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.344719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.344744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.344926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.345070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.345094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.345244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.345390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.345415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.345557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.345732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.345757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.345895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.346039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.346064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.346241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.346423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.346447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.346592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.346768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.346793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.346976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.347152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.347176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.347349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.347528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.347552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.347693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.347845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.347870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.348008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.348155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.348181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.348352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.348518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.348543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.348724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.348873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.348897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.349049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.349218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.349243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.349428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.349606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.349632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.349792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.349981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.350006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.350182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.350352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.350377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.350558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.350708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.350733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.350884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.351059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.351084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.351232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.351393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.351418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.351584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.351738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.351764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.351959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.352109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.352134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.352287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.352428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.352452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.352599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.352757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.352785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.286 qpair failed and we were unable to recover it.
00:26:47.286 [2024-05-12 07:06:54.352940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.286 [2024-05-12 07:06:54.353125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.353150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.353327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.353515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.353540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.353724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.353879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.353904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.354075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.354225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.354250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.354396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.354555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.354579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.354768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.354918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.354943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.355131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.355311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.355336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.355486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.355660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.355684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.355855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.356025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.356049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.356200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.356402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.356427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.356566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.356769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.356796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.356944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.357109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.357134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.357313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.357490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.357515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.357687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.357844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.357873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.358043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.358228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.358252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.358456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.358631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.358655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.358805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.358961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.358985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.359145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.359289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.359315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.359520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.359702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.359728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.359884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.360068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.360093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.360259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.360431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.360456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.360593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.360772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.360798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.360949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.361102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.361127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.361299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.361457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.361482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.361651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.361812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.361837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.362013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.362165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.362200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.362351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.362523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.362548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.362700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.362851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.362878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.363017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.363172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.363197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.363345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.363494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.287 [2024-05-12 07:06:54.363519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.287 qpair failed and we were unable to recover it.
00:26:47.287 [2024-05-12 07:06:54.363712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.287 [2024-05-12 07:06:54.363876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.287 [2024-05-12 07:06:54.363901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.287 qpair failed and we were unable to recover it. 00:26:47.287 [2024-05-12 07:06:54.364066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.287 [2024-05-12 07:06:54.364216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.287 [2024-05-12 07:06:54.364240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.287 qpair failed and we were unable to recover it. 00:26:47.287 [2024-05-12 07:06:54.364391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.287 [2024-05-12 07:06:54.364530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.364554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.364709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.364848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.364873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 
00:26:47.288 [2024-05-12 07:06:54.365037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.365233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.365258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.365442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.365590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.365615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.365764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.365910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.365935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.366104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.366275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.366306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 
00:26:47.288 [2024-05-12 07:06:54.366482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.366663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.366688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.366864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.367019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.367044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.367205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.367383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.367408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.367576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.367751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.367776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 
00:26:47.288 [2024-05-12 07:06:54.367942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.368087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.368112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.368289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.368433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.368457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.368633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.368824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.368849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 00:26:47.288 [2024-05-12 07:06:54.369021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.369173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.288 [2024-05-12 07:06:54.369197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.288 qpair failed and we were unable to recover it. 
00:26:47.563 [2024-05-12 07:06:54.369366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.369512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.369537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 00:26:47.563 [2024-05-12 07:06:54.369686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.369847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.369872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 00:26:47.563 [2024-05-12 07:06:54.370045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.370190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.370215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 00:26:47.563 [2024-05-12 07:06:54.370408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.370598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.370623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 
00:26:47.563 [2024-05-12 07:06:54.370772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.370923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.370947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 00:26:47.563 [2024-05-12 07:06:54.371124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.371276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.371301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 00:26:47.563 [2024-05-12 07:06:54.371451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.371605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.371631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 00:26:47.563 [2024-05-12 07:06:54.371817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.371974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.372000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 
00:26:47.563 [2024-05-12 07:06:54.372175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.372363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.372388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 00:26:47.563 [2024-05-12 07:06:54.372530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.372723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.563 [2024-05-12 07:06:54.372749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.563 qpair failed and we were unable to recover it. 00:26:47.563 [2024-05-12 07:06:54.372898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.373045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.373070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.373222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.373391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.373415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 
00:26:47.564 [2024-05-12 07:06:54.373555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.373716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.373742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.373917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.374068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.374092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.374262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.374407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.374432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.374571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.374738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.374764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 
00:26:47.564 [2024-05-12 07:06:54.374903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.375047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.375073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.375258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.375431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.375456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.375611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.375769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.375798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.375940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.376093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.376118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 
00:26:47.564 [2024-05-12 07:06:54.376271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.376417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.376443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.376624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.376780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.376807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.376950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.377122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.377147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.377294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.377491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.377515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 
00:26:47.564 [2024-05-12 07:06:54.377691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.377857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.377882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.378055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.378243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.378267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.378423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.378594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.378618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.378794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.378941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.378966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 
00:26:47.564 [2024-05-12 07:06:54.379168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.379307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.379335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.379515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.379667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.379707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.379852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.380004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.380030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.380175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.380324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.380349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 
00:26:47.564 [2024-05-12 07:06:54.380530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.380719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.380745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.380897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.381035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.381060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.381205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.381374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.381399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.381571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.381720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.381746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 
00:26:47.564 [2024-05-12 07:06:54.381890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.382035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.382060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.382230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.382375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.382400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.382603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.382789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.382815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 00:26:47.564 [2024-05-12 07:06:54.382995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.383139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.564 [2024-05-12 07:06:54.383163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.564 qpair failed and we were unable to recover it. 
00:26:47.565 [2024-05-12 07:06:54.383304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.383473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.383497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.383639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.383795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.383820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.384000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.384168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.384192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.384378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.384549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.384573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.384731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.384900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.384925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.385092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.385261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.385286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.385478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.385647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.385671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.385828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.385978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.386005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.386166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.386313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.386338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.386520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.386704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.386730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.386905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.387053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.387077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.387250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.387400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.387425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.387567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.387720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.387745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.387894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.388065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.388089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.388235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.388378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.388402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.388574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.388748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.388774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.388942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.389143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.389168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.389365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.389533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.389557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.389723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.389891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.389916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.390089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.390259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.390284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.390428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.390594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.390619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.390806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.390961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.390985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.391157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.391327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.391351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.391528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.391721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.391747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.391920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.392100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.392124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.392271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.392411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.392436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.392572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.392740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.392765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.392970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.393114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.393138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.393282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.393428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.393453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.565 qpair failed and we were unable to recover it.
00:26:47.565 [2024-05-12 07:06:54.393606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.565 [2024-05-12 07:06:54.393774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.393801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.393947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.394095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.394119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.394311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.394466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.394490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.394641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.394809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.394834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.395007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.395211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.395235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.395380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.395526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.395551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.395727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.395874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.395899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.396104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.396272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.396297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.396476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.396615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.396639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.396811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.396968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.396992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.397155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.397304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.397333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.397496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.397641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.397666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.397823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.397967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.397992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.398163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.398301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.398326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.398503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.398641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.398665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.398845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.399022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.399047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.399226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.399371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.399396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.399573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.399750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.399776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.399949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.400117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.400141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.400291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.400432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.400456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.400637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.400821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.400846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.401037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.401180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.401204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.401361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.401524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.401548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.401735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.401890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.401917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.402096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.402276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.402302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.402450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.402591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.402616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.402778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.402955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.402981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.403152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.403352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.403376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.403549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.403718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.403743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.566 [2024-05-12 07:06:54.403888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.404032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.566 [2024-05-12 07:06:54.404057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.566 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.404235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.404414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.404439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.404588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.404767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.404793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.404939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.405112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.405137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.405314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.405481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.405506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.405668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.405836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.405861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.406002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.406182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.406207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.406362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.406523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.406547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.406740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.406891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.406917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.407098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.407275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.407301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.407455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.407620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.407645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.407807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.407981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.408006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.408201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.408378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.408403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.408594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.408772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.408798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.408981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.409122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.409147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.409293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.409440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.409465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.409605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.409785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.409811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.409961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.410163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.410188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.410339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.410485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.410509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.410690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.410843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.410868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.411034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.411201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.411225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.411392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.411587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.411611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.411783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.411949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.411974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.412176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.412356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.412381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.412540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.412705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.412739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.412893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.413073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.413098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.413248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.413448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.413473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.567 [2024-05-12 07:06:54.413648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.413800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.567 [2024-05-12 07:06:54.413825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.567 qpair failed and we were unable to recover it.
00:26:47.568 [2024-05-12 07:06:54.413971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.568 [2024-05-12 07:06:54.414141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.568 [2024-05-12 07:06:54.414167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.568 qpair failed and we were unable to recover it.
00:26:47.568 [2024-05-12 07:06:54.414318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.568 [2024-05-12 07:06:54.414462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.568 [2024-05-12 07:06:54.414486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.568 qpair failed and we were unable to recover it.
00:26:47.568 [2024-05-12 07:06:54.414664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.414845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.414871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.415064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.415203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.415227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.415372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.415514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.415543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.415690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.415878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.415903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 
00:26:47.568 [2024-05-12 07:06:54.416086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.416263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.416288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.416432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.416607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.416633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.416806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.416962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.416991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.417142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.417311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.417336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 
00:26:47.568 [2024-05-12 07:06:54.417482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.417659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.417684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.417843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.418024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.418049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.418222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.418384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.418409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.418593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.418758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.418783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 
00:26:47.568 [2024-05-12 07:06:54.418939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.419139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.419164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.419313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.419450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.419475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.419647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.419831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.419857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.420031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.420229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.420254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 
00:26:47.568 [2024-05-12 07:06:54.420437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.420581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.420605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.420789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.420950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.420975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.421153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.421343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.421368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.421511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.421699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.421724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 
00:26:47.568 [2024-05-12 07:06:54.421888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.422030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.568 [2024-05-12 07:06:54.422056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.568 qpair failed and we were unable to recover it. 00:26:47.568 [2024-05-12 07:06:54.422221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.422394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.422419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.422584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.422796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.422822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.423016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.423164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.423196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 
00:26:47.569 [2024-05-12 07:06:54.423380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.423552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.423577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.423761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.423930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.423955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.424103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.424279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.424304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.424494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.424662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.424686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 
00:26:47.569 [2024-05-12 07:06:54.424855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.425029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.425054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.425195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.425394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.425419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.425597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.425749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.425775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.425956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.426107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.426134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 
00:26:47.569 [2024-05-12 07:06:54.426310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.426484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.426509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.426686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.426836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.426861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.427009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.427155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.427180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.427327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.427500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.427525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 
00:26:47.569 [2024-05-12 07:06:54.427693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.427863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.427888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.428055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.428219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.428244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.428415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.428557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.428582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.428744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.428888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.428913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 
00:26:47.569 [2024-05-12 07:06:54.429088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.429267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.429292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.429506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.429672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.429702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.429879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.430034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.430058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.430208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.430357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.430383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 
00:26:47.569 [2024-05-12 07:06:54.430568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.430725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.430751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.430929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.431078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.431103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.431267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.431443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.431468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.431641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.431792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.431817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 
00:26:47.569 [2024-05-12 07:06:54.431985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.432163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.432188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.432333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.432481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.432506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.569 qpair failed and we were unable to recover it. 00:26:47.569 [2024-05-12 07:06:54.432679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.432881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.569 [2024-05-12 07:06:54.432906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 00:26:47.570 [2024-05-12 07:06:54.433081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.433254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.433280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 
00:26:47.570 [2024-05-12 07:06:54.433430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.433577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.433603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 00:26:47.570 [2024-05-12 07:06:54.433755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.433910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.433940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 00:26:47.570 [2024-05-12 07:06:54.434099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.434303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.434328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 00:26:47.570 [2024-05-12 07:06:54.434467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.434606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.434631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 
00:26:47.570 [2024-05-12 07:06:54.434779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.434932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.434956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 00:26:47.570 [2024-05-12 07:06:54.435132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.435295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.435319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 00:26:47.570 [2024-05-12 07:06:54.435496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.435681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.435713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 00:26:47.570 [2024-05-12 07:06:54.435887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.436061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.570 [2024-05-12 07:06:54.436085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.570 qpair failed and we were unable to recover it. 
00:26:47.573 [2024-05-12 07:06:54.466340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.466495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.466519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.466665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.466833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.466858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.466998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.467174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.467199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.467346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.467487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.467512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 
00:26:47.573 [2024-05-12 07:06:54.467675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.467840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.467864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.468029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.468219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.468244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.468403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.468605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.468630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.468794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.468967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.468992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 
00:26:47.573 [2024-05-12 07:06:54.469172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.469319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.469344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.469521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.469690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.469718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.469866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.470033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.470058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.470233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.470405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.470429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 
00:26:47.573 [2024-05-12 07:06:54.470592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.470798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.470824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.470982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.471121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.471146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.471326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.471475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.471500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.471692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.471887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.471912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 
00:26:47.573 [2024-05-12 07:06:54.472117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.472296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.472320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.472480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.472651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.472675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.472823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.472987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.473012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.473175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.473346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.473370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 
00:26:47.573 [2024-05-12 07:06:54.473512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.473656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.573 [2024-05-12 07:06:54.473680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.573 qpair failed and we were unable to recover it. 00:26:47.573 [2024-05-12 07:06:54.473837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.474000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.474030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.474201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.474341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.474367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.474567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.474755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.474781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 
00:26:47.574 [2024-05-12 07:06:54.474931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.475074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.475099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.475273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.475421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.475448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.475599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.475746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.475771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.475944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.476109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.476133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 
00:26:47.574 [2024-05-12 07:06:54.476310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.476494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.476518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.476659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.476841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.476866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.477006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.477178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.477203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.477373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.477518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.477542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 
00:26:47.574 [2024-05-12 07:06:54.477723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.477897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.477921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.478063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.478222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.478247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.478388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.478542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.478567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.478722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.478902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.478926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 
00:26:47.574 [2024-05-12 07:06:54.479074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.479214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.479239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.479402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.479548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.479573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.479765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.479913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.479939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.480111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.480286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.480312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 
00:26:47.574 [2024-05-12 07:06:54.480506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.480683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.480712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.480888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.481048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.481074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.481257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.481429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.481453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.481622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.481797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.481822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 
00:26:47.574 [2024-05-12 07:06:54.481997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.482140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.482164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.482369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.482543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.482567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.482732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.482902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.482927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.483084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.483261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.483286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 
00:26:47.574 [2024-05-12 07:06:54.483442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.483623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.483647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.483833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.483990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.484017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.574 qpair failed and we were unable to recover it. 00:26:47.574 [2024-05-12 07:06:54.484164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.574 [2024-05-12 07:06:54.484340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.484365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 00:26:47.575 [2024-05-12 07:06:54.484515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.484658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.484683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 
00:26:47.575 [2024-05-12 07:06:54.484847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.485019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.485043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 00:26:47.575 [2024-05-12 07:06:54.485187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.485339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.485365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 00:26:47.575 [2024-05-12 07:06:54.485537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.485718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.485744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 00:26:47.575 [2024-05-12 07:06:54.485905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.486082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.486107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 
00:26:47.575 [2024-05-12 07:06:54.486295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.486440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.486465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 00:26:47.575 [2024-05-12 07:06:54.486632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.486785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.486810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 00:26:47.575 [2024-05-12 07:06:54.486985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.487157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.487182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 00:26:47.575 [2024-05-12 07:06:54.487352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.487500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.575 [2024-05-12 07:06:54.487525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.575 qpair failed and we were unable to recover it. 
00:26:47.575-00:26:47.578 [2024-05-12 07:06:54.487670 .. 07:06:54.517823] (the same error group repeated 84 more times: posix.c:1032:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it)
00:26:47.578 [2024-05-12 07:06:54.517979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.518181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.518206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.518347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.518537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.518561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.518709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.518856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.518882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.519071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.519211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.519236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 
00:26:47.578 [2024-05-12 07:06:54.519391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.519535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.519560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.519724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.519918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.519943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.520134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.520285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.520311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.520474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.520621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.520648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 
00:26:47.578 [2024-05-12 07:06:54.520823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.520960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.520985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.521151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.521343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.521369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.521563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.521742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.521767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.521925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.522108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.522133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 
00:26:47.578 [2024-05-12 07:06:54.522303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.522445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.522470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.522636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.522825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.522850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.523009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.523182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.523206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.523349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.523492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.523516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 
00:26:47.578 [2024-05-12 07:06:54.523710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.523877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.523902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.524082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.524230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.524255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.524400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.524572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.524597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.578 [2024-05-12 07:06:54.524804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.524951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.524975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 
00:26:47.578 [2024-05-12 07:06:54.525150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.525307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.578 [2024-05-12 07:06:54.525332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.578 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.525518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.525692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.525721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.525901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.526046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.526071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.526220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.526388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.526413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 
00:26:47.579 [2024-05-12 07:06:54.526587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.526776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.526800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.526980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.527159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.527184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.527359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.527526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.527550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.527706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.527877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.527903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 
00:26:47.579 [2024-05-12 07:06:54.528084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.528269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.528293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.528467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.528642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.528666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.528828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.529004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.529030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.529182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.529359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.529383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 
00:26:47.579 [2024-05-12 07:06:54.529559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.529718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.529743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.529939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.530088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.530114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.530282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.530456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.530481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.530624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.530826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.530850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 
00:26:47.579 [2024-05-12 07:06:54.530991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.531153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.531178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.531356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.531511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.531536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.531685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.531840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.531864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.532038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.532209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.532234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 
00:26:47.579 [2024-05-12 07:06:54.532409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.532550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.532577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.532718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.532897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.532921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.533094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.533267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.533291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.533487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.533639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.533664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 
00:26:47.579 [2024-05-12 07:06:54.533842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.533984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.534008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.534158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.534302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.534327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.534503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.534652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.534676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.534840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.535009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.535034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 
00:26:47.579 [2024-05-12 07:06:54.535205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.535376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.535401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.535581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.535737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.535762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.579 [2024-05-12 07:06:54.535906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.536049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.579 [2024-05-12 07:06:54.536074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.579 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.536230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.536372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.536397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 
00:26:47.580 [2024-05-12 07:06:54.536542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.536708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.536732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.536871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.537018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.537043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.537217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.537362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.537386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.537564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.537705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.537730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 
00:26:47.580 [2024-05-12 07:06:54.537901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.538092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.538117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.538307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.538479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.538503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.538657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.538834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.538859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.539005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.539180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.539204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 
00:26:47.580 [2024-05-12 07:06:54.539352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.539556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.539582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.539726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.539872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.539897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.540064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.540238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.540262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.540403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.540552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.540577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 
00:26:47.580 [2024-05-12 07:06:54.540749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.540926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.540950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.541117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.541292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.541317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.541466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.541655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.541679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.541857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.542023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.542047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 
00:26:47.580 [2024-05-12 07:06:54.542204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.542351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.542376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.542556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.542760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.542786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.542959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.543105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.543129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.543304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.543473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.543498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 
00:26:47.580 [2024-05-12 07:06:54.543660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.543813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.543838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.544002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.544153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.544179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.544378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.544548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.544572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.544720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.544891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.544915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 
00:26:47.580 [2024-05-12 07:06:54.545092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.545294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.545319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.545490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.545639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.545663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.545847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.545994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.546019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 00:26:47.580 [2024-05-12 07:06:54.546182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.546355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.546380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.580 qpair failed and we were unable to recover it. 
00:26:47.580 [2024-05-12 07:06:54.546553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.580 [2024-05-12 07:06:54.546698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.546723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.546890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.547047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.547072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.547245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.547417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.547442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.547605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.547754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.547779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 
00:26:47.581 [2024-05-12 07:06:54.547956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.548106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.548132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.548279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.548449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.548473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.548621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.548800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.548827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.548976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.549157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.549182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 
00:26:47.581 [2024-05-12 07:06:54.549326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.549467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.549492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.549633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.549810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.549836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.549990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.550166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.550191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.550353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.550513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.550544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 
00:26:47.581 [2024-05-12 07:06:54.550693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.550842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.550865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.551017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.551169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.551193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.551336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.551510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.551535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.551693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.551899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.551923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 
00:26:47.581 [2024-05-12 07:06:54.552086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.552280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.552304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.552474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.552640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.552664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.552824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.552968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.552993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.553152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.553295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.553320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 
00:26:47.581 [2024-05-12 07:06:54.553501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.553672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.553714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.553863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.554013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.554038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.554193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.554365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.554390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.554570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.554740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.554766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 
00:26:47.581 [2024-05-12 07:06:54.554942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.555090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.555114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.581 [2024-05-12 07:06:54.555265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.555437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.581 [2024-05-12 07:06:54.555462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.581 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.555625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.555809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.555834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.555975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.556124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.556148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 
00:26:47.582 [2024-05-12 07:06:54.556291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.556436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.556461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.556603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.556746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.556771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.556928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.557102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.557126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.557306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.557481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.557506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 
00:26:47.582 [2024-05-12 07:06:54.557657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.557831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.557856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.558006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.558196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.558221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.558389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.558553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.558576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.558753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.558931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.558956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 
00:26:47.582 [2024-05-12 07:06:54.559133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.559281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.559305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.559482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.559622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.559646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.559790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.559939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.559964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.560107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.560273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.560297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 
00:26:47.582 [2024-05-12 07:06:54.560477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.560628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.560653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.560826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.560972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.560997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.561152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.561292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.561318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.561456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.561597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.561622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 
00:26:47.582 [2024-05-12 07:06:54.561772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.561922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.561947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.562124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.562266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.562290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.562442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.562582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.562606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 00:26:47.582 [2024-05-12 07:06:54.562776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.562949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.582 [2024-05-12 07:06:54.562973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.582 qpair failed and we were unable to recover it. 
00:26:47.582 [2024-05-12 07:06:54.563142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.563281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.563306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.582 qpair failed and we were unable to recover it.
00:26:47.582 [2024-05-12 07:06:54.563481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.563651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.563675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.582 qpair failed and we were unable to recover it.
00:26:47.582 [2024-05-12 07:06:54.563878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.564035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.564060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.582 qpair failed and we were unable to recover it.
00:26:47.582 [2024-05-12 07:06:54.564204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.564379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.564404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.582 qpair failed and we were unable to recover it.
00:26:47.582 [2024-05-12 07:06:54.564574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.564738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.564764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.582 qpair failed and we were unable to recover it.
00:26:47.582 [2024-05-12 07:06:54.564913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.565079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.565104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.582 qpair failed and we were unable to recover it.
00:26:47.582 [2024-05-12 07:06:54.565280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.565423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.565448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.582 qpair failed and we were unable to recover it.
00:26:47.582 [2024-05-12 07:06:54.565595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.582 [2024-05-12 07:06:54.565774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.565799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.565956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.566109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.566132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.566305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.566461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.566486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.566629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.566802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.566828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.567032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.567223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.567248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.567412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.567592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.567617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.567771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.567916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.567940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.568122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.568296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.568324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.568501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.568672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.568703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.568873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.569049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.569074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.569248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.569419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.569444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.569649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.569825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.569850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.570026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.570171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.570196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.570350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.570524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.570548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.570700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.570876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.570900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.571040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.571190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.571214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.571409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.571553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.571577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.571778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.571935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.571963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.572135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.572307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.572332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.572507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.572654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.572678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.572866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.573021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.573045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.573191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.573338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.573363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.573544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.573682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.573719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.573879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.574049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.574073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.574214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.574359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.574384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.574594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.574741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.574765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.574946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.575122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.575146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.575315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.575462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.575487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.575652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.575820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.575844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.583 qpair failed and we were unable to recover it.
00:26:47.583 [2024-05-12 07:06:54.576024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.576204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.583 [2024-05-12 07:06:54.576229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.576373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.576547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.576571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.576751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.576929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.576954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.577116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.577267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.577294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.577445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.577623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.577648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.577818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.577976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.578003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.578180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.578351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.578375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.578520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.578698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.578723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.578873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.579016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.579040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.579213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.579362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.579387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.579564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.579718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.579744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.579886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.580048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.580072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.580223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.580444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.580468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.580673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.580835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.580859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.581007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.581182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.581206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.581358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.581551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.581576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.581752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.581903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.581927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.582099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.582276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.582300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.582480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.582650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.582674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.582881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.583042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.583068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.583216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.583416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.583440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.583609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.583778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.583803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.583980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.584130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.584154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.584305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.584485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.584510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.584664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.584814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.584839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.585008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.585153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.585177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.585348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.585500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.585525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.585700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.585870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.585894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 [2024-05-12 07:06:54.586045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 07:06:54 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:47.584 07:06:54 -- common/autotest_common.sh@852 -- # return 0 [2024-05-12 07:06:54.586194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 [2024-05-12 07:06:54.586219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.584 qpair failed and we were unable to recover it.
00:26:47.584 07:06:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt [2024-05-12 07:06:54.586375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.584 07:06:54 -- common/autotest_common.sh@718 -- # xtrace_disable [2024-05-12 07:06:54.586549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 07:06:54 -- common/autotest_common.sh@10 -- # set +x
00:26:47.585 [2024-05-12 07:06:54.586574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.586768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.586923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.586947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.587113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.587261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.587286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.587437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.587612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.587638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.587790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.587936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.587959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.588139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.588284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.588310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.588456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.588619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.588644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.588798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.588973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.588998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.589160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.589308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.589333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.589476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.589621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.589646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.589815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.589992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.590017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.590232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.590401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.590425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.590599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.590774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.590800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.590978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.591138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.591163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.591362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.591567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.591593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.591783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.591946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.591970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.592172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.592349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.592374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.592542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.592681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.592709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.592890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.593044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.593068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.593242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.593399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.593424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.593607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.593769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.593794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.593969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.594119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.585 [2024-05-12 07:06:54.594144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.585 qpair failed and we were unable to recover it.
00:26:47.585 [2024-05-12 07:06:54.594324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.594511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.594537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.585 qpair failed and we were unable to recover it. 00:26:47.585 [2024-05-12 07:06:54.594719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.594892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.594918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.585 qpair failed and we were unable to recover it. 00:26:47.585 [2024-05-12 07:06:54.595067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.595223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.595248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.585 qpair failed and we were unable to recover it. 00:26:47.585 [2024-05-12 07:06:54.595429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.595603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.595626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.585 qpair failed and we were unable to recover it. 
00:26:47.585 [2024-05-12 07:06:54.595791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.595932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.595957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.585 qpair failed and we were unable to recover it. 00:26:47.585 [2024-05-12 07:06:54.596129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.596276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.596301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.585 qpair failed and we were unable to recover it. 00:26:47.585 [2024-05-12 07:06:54.596440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.596611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.596636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.585 qpair failed and we were unable to recover it. 00:26:47.585 [2024-05-12 07:06:54.596790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.596937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.596961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.585 qpair failed and we were unable to recover it. 
00:26:47.585 [2024-05-12 07:06:54.597165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.585 [2024-05-12 07:06:54.597353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.597378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.597535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.597751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.597778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.597929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.598086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.598111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.598304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.598496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.598522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 
00:26:47.586 [2024-05-12 07:06:54.598703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.598866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.598891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.599079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.599256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.599280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.599453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.599626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.599651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.599840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.600018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.600042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 
00:26:47.586 [2024-05-12 07:06:54.600189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.600338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.600364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.600536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.600677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.600719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.600872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.601031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.601055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.601202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.601387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.601419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 
00:26:47.586 [2024-05-12 07:06:54.601613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.601759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.601784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.601937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.602102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.602128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.602282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.602443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.602467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.602634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.602818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.602843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 
00:26:47.586 [2024-05-12 07:06:54.602995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.603168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.603193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.603351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.603556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.603581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.603755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.603921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.603946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 
00:26:47.586 07:06:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:47.586 [2024-05-12 07:06:54.604105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.586 07:06:54 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:47.586 07:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:47.586 [2024-05-12 07:06:54.604257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.586 [2024-05-12 07:06:54.604283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.586 qpair failed and we were unable to recover it.
00:26:47.586 07:06:54 -- common/autotest_common.sh@10 -- # set +x
00:26:47.586 [2024-05-12 07:06:54.604453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.586 [2024-05-12 07:06:54.604630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.586 [2024-05-12 07:06:54.604655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.586 qpair failed and we were unable to recover it.
00:26:47.586 [2024-05-12 07:06:54.604861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.586 [2024-05-12 07:06:54.605025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.586 [2024-05-12 07:06:54.605050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.586 qpair failed and we were unable to recover it.
00:26:47.586 [2024-05-12 07:06:54.605243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.605403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.605427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.605605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.605764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.605790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.586 [2024-05-12 07:06:54.605964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.606109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.586 [2024-05-12 07:06:54.606134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.586 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.606276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.606422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.606447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 
00:26:47.587 [2024-05-12 07:06:54.606648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.606827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.606852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.607027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.607288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.607313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.607461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.607607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.607632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.607839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.608017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.608043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 
00:26:47.587 [2024-05-12 07:06:54.608192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.608370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.608395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.608574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.608741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.608766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.608944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.609136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.609160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.609340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.609490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.609515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 
00:26:47.587 [2024-05-12 07:06:54.609673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.609829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.609854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.610005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.610149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.610173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.610324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.610497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.610522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.610710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.610914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.610939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 
00:26:47.587 [2024-05-12 07:06:54.611105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.611280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.611305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.611458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.611599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.611624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.611800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.611944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.611969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.612125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.612305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.612331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 
00:26:47.587 [2024-05-12 07:06:54.612488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.612631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.612657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.612837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.612977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.613006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.613166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.613322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.613346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.613509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.613665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.613690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 
00:26:47.587 [2024-05-12 07:06:54.613874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.614024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.614049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.614233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.614382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.614407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.614587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.614732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.614757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.614906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.615061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.615085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 
00:26:47.587 [2024-05-12 07:06:54.615266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.615443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.615467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.615614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.615795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.615819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.615996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.616160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.616184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 00:26:47.587 [2024-05-12 07:06:54.616340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.616522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.587 [2024-05-12 07:06:54.616546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.587 qpair failed and we were unable to recover it. 
00:26:47.587 [2024-05-12 07:06:54.616691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.587 [2024-05-12 07:06:54.616841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.587 [2024-05-12 07:06:54.616866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.617017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.617211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.617236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.617382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.617571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.617596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.617809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.617960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.617985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.618165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.618352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.618377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.618580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.618732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.618757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.618919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.619108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.619136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.619291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.619582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.619607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.619756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.619909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.619933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.620090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.620234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.620259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.620526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.620711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.620752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.620908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.621069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.621093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.621270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.621446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.621470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.621621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.621786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.621811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.621981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.622131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.622155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.622306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.622488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.622512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.622678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.622862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.622887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.623091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.623256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.623281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.623448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.623596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.623620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.623798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.623976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.624005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.624148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.624322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.624346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.624498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.624666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.624706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.624862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.625015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.625039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.625185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.625330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.625354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.625500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.625680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.625714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.625871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.626021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.626047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 Malloc0
00:26:47.588 [2024-05-12 07:06:54.626226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.626413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.626439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 07:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
[2024-05-12 07:06:54.626645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 07:06:54 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
[2024-05-12 07:06:54.626838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 07:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable
[2024-05-12 07:06:54.626863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 07:06:54 -- common/autotest_common.sh@10 -- # set +x
00:26:47.588 [2024-05-12 07:06:54.627015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.627161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.627185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.588 qpair failed and we were unable to recover it.
00:26:47.588 [2024-05-12 07:06:54.627346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.588 [2024-05-12 07:06:54.627524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.627549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.627706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.627852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.627877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.628052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.628203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.628227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.628379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.628520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.628544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.628701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.628880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.628906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.629052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.629222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.629246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.629398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.629588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.629613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.629791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.629957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.629986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.630024] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:47.589 [2024-05-12 07:06:54.630171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.630324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.630349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.630503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.630658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.630684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.630848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.631002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.631026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.631176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.631328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.631353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.631501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.631648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.631672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.631839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.631990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.632013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.632242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.632421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.632446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.632617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.632786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.632810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.632962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.633114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.633138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.633278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.633458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.633483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.633636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.633796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.633821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.633977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.634124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.634148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.634295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.634471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.634496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.634668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.634835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.634859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.635068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.635208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.635233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.635386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.635534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.635560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.635719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.635862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.635886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.636037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.636179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.636204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.636384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.636546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.636570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.636736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.636900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.636930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.637090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.637230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.637255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.637414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.637588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.589 [2024-05-12 07:06:54.637612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.589 qpair failed and we were unable to recover it.
00:26:47.589 [2024-05-12 07:06:54.637803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.637973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.637998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 [2024-05-12 07:06:54.638178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 07:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
07:06:54 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
[2024-05-12 07:06:54.638323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.638347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 07:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable
[2024-05-12 07:06:54.638489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 07:06:54 -- common/autotest_common.sh@10 -- # set +x
[2024-05-12 07:06:54.638657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-12 07:06:54.638691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 [2024-05-12 07:06:54.638851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.639006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.639031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 [2024-05-12 07:06:54.639200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.639353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.639377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 [2024-05-12 07:06:54.639552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.639739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.639765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 [2024-05-12 07:06:54.639909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.640048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.640072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 [2024-05-12 07:06:54.640242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.640398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.640421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 [2024-05-12 07:06:54.640626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.640809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.640834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 [2024-05-12 07:06:54.640990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.641174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.590 [2024-05-12 07:06:54.641198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420
00:26:47.590 qpair failed and we were unable to recover it.
00:26:47.590 [2024-05-12 07:06:54.641341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.641490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.641513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.641660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.641834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.641859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.642058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.642213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.642237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.642382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.642533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.642557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 
00:26:47.590 [2024-05-12 07:06:54.642738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.642890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.642915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.643069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.643271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.643295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.643449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.643624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.643649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.643807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.643985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.644017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 
00:26:47.590 [2024-05-12 07:06:54.644169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.644322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.644347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.644485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.644655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.644679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.644837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.644986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.645017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.645204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.645384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.645409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 
00:26:47.590 [2024-05-12 07:06:54.645583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.645749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.645774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 [2024-05-12 07:06:54.645953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.646125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 [2024-05-12 07:06:54.646149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.590 qpair failed and we were unable to recover it. 00:26:47.590 07:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.590 [2024-05-12 07:06:54.646318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.590 07:06:54 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:47.590 07:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:47.591 [2024-05-12 07:06:54.646483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.646508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 
00:26:47.591 07:06:54 -- common/autotest_common.sh@10 -- # set +x 00:26:47.591 [2024-05-12 07:06:54.646686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.646839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.646864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.647020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.647189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.647218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.647391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.647583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.647607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.647772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.647922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.647949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 
00:26:47.591 [2024-05-12 07:06:54.648102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.648251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.648276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.648423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.648570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.648594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.648770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.648922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.648948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.649102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.649290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.649315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 
00:26:47.591 [2024-05-12 07:06:54.649502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.649640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.649665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.649817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.649970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.649995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.650177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.650341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.650366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.650537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.650721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.650747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 
00:26:47.591 [2024-05-12 07:06:54.650904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.651053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.651077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.651259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.651404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.651429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.651583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.651734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.651760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.651913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.652112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.652137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 
00:26:47.591 [2024-05-12 07:06:54.652311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.652463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.652488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.652642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.652810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.652836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.652991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.653246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.653271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.653422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.653566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.653591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 
00:26:47.591 [2024-05-12 07:06:54.653745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.654004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.654029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.654203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 07:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.591 07:06:54 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.591 [2024-05-12 07:06:54.654378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.654407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 07:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:47.591 [2024-05-12 07:06:54.654584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 07:06:54 -- common/autotest_common.sh@10 -- # set +x 00:26:47.591 [2024-05-12 07:06:54.654773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.654799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 
00:26:47.591 [2024-05-12 07:06:54.654969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.655139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.655163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.655320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.655470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.655495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.655641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.655821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.655846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.591 [2024-05-12 07:06:54.655988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.656139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.656164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 
00:26:47.591 [2024-05-12 07:06:54.656336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.656483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.591 [2024-05-12 07:06:54.656509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.591 qpair failed and we were unable to recover it. 00:26:47.592 [2024-05-12 07:06:54.656658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.592 [2024-05-12 07:06:54.656839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.592 [2024-05-12 07:06:54.656866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.592 qpair failed and we were unable to recover it. 00:26:47.592 [2024-05-12 07:06:54.657044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.592 [2024-05-12 07:06:54.657205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.592 [2024-05-12 07:06:54.657231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.592 qpair failed and we were unable to recover it. 00:26:47.592 [2024-05-12 07:06:54.657412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.592 [2024-05-12 07:06:54.657605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.592 [2024-05-12 07:06:54.657631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.592 qpair failed and we were unable to recover it. 
00:26:47.592 [2024-05-12 07:06:54.657808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.592 [2024-05-12 07:06:54.657960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.592 [2024-05-12 07:06:54.657991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16599f0 with addr=10.0.0.2, port=4420 00:26:47.592 qpair failed and we were unable to recover it. 00:26:47.592 [2024-05-12 07:06:54.658167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.592 [2024-05-12 07:06:54.658224] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.592 [2024-05-12 07:06:54.661086] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:26:47.592 [2024-05-12 07:06:54.661150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16599f0 (107): Transport endpoint is not connected 00:26:47.592 [2024-05-12 07:06:54.661214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.592 qpair failed and we were unable to recover it. 
00:26:47.592 07:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.592 07:06:54 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:47.592 07:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:47.592 07:06:54 -- common/autotest_common.sh@10 -- # set +x 00:26:47.592 07:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.592 07:06:54 -- host/target_disconnect.sh@58 -- # wait 3147372 00:26:47.592 [2024-05-12 07:06:54.670708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.592 [2024-05-12 07:06:54.670887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.592 [2024-05-12 07:06:54.670915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.592 [2024-05-12 07:06:54.670931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.592 [2024-05-12 07:06:54.670945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.592 [2024-05-12 07:06:54.670973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.592 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.680757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.680922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.680949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.680964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.680977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.852 [2024-05-12 07:06:54.681005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.690633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.690820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.690848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.690867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.690880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.852 [2024-05-12 07:06:54.690909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.700745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.700917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.700944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.700958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.700971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.852 [2024-05-12 07:06:54.701006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.710713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.710870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.710896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.710911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.710924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.852 [2024-05-12 07:06:54.710952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.720673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.720831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.720857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.720871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.720884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.852 [2024-05-12 07:06:54.720911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.730735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.730896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.730921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.730935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.730948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.852 [2024-05-12 07:06:54.730976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.740767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.740950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.740976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.740996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.741009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.852 [2024-05-12 07:06:54.741037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.750785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.750976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.751001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.751016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.751029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.852 [2024-05-12 07:06:54.751056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.760819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.760977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.761006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.761021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.761034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.852 [2024-05-12 07:06:54.761062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-05-12 07:06:54.770862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.852 [2024-05-12 07:06:54.771061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.852 [2024-05-12 07:06:54.771086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.852 [2024-05-12 07:06:54.771100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.852 [2024-05-12 07:06:54.771112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.771140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.780858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.781015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.781041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.781056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.781069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.781097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.790913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.791070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.791097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.791115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.791128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.791157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.800956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.801108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.801134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.801149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.801162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.801189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.810925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.811092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.811117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.811132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.811144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.811172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.821018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.821221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.821246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.821260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.821272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.821299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.831105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.831284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.831308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.831328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.831341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.831368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.841045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.841192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.841217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.841231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.841244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.841270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.851043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.851196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.851221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.851235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.851247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.851274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.861269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.861437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.861462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.861476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.861488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.861515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.871204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.871351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.871377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.871391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.871403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.871431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.881251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.881444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.881469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.881484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.881496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.881524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.891211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.891367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.891392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.891406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.891418] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.891446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.901202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.901404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.901430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.901444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.853 [2024-05-12 07:06:54.901457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.853 [2024-05-12 07:06:54.901484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-05-12 07:06:54.911280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.853 [2024-05-12 07:06:54.911439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.853 [2024-05-12 07:06:54.911466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.853 [2024-05-12 07:06:54.911483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.854 [2024-05-12 07:06:54.911497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.854 [2024-05-12 07:06:54.911525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:47.854 [2024-05-12 07:06:54.921289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.854 [2024-05-12 07:06:54.921444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.854 [2024-05-12 07:06:54.921470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.854 [2024-05-12 07:06:54.921489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.854 [2024-05-12 07:06:54.921503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.854 [2024-05-12 07:06:54.921532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:47.854 [2024-05-12 07:06:54.931259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.854 [2024-05-12 07:06:54.931424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.854 [2024-05-12 07:06:54.931449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.854 [2024-05-12 07:06:54.931464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.854 [2024-05-12 07:06:54.931476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.854 [2024-05-12 07:06:54.931503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:47.854 [2024-05-12 07:06:54.941349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.854 [2024-05-12 07:06:54.941513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.854 [2024-05-12 07:06:54.941538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.854 [2024-05-12 07:06:54.941553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.854 [2024-05-12 07:06:54.941565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.854 [2024-05-12 07:06:54.941592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:47.854 [2024-05-12 07:06:54.951353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.854 [2024-05-12 07:06:54.951503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.854 [2024-05-12 07:06:54.951527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.854 [2024-05-12 07:06:54.951542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.854 [2024-05-12 07:06:54.951554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.854 [2024-05-12 07:06:54.951581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:47.854 [2024-05-12 07:06:54.961386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.854 [2024-05-12 07:06:54.961538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.854 [2024-05-12 07:06:54.961563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.854 [2024-05-12 07:06:54.961577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.854 [2024-05-12 07:06:54.961589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.854 [2024-05-12 07:06:54.961616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:47.854 [2024-05-12 07:06:54.971414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.854 [2024-05-12 07:06:54.971580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.854 [2024-05-12 07:06:54.971604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.854 [2024-05-12 07:06:54.971618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.854 [2024-05-12 07:06:54.971630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:47.854 [2024-05-12 07:06:54.971657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:48.114 [2024-05-12 07:06:54.981467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.114 [2024-05-12 07:06:54.981612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.114 [2024-05-12 07:06:54.981637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.114 [2024-05-12 07:06:54.981651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.114 [2024-05-12 07:06:54.981663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.114 [2024-05-12 07:06:54.981690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.114 qpair failed and we were unable to recover it. 
00:26:48.114 [2024-05-12 07:06:54.991494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.114 [2024-05-12 07:06:54.991658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.114 [2024-05-12 07:06:54.991684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:54.991705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:54.991719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:54.991746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.001542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.001725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.001751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.001766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.001778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.001805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.011562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.011719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.011750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.011766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.011778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.011806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.021649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.021819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.021844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.021858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.021871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.021898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.031587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.031742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.031767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.031782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.031794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.031821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.041637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.041814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.041839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.041853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.041866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.041892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.051659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.051832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.051857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.051871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.051884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.051911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.061700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.061856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.061881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.061895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.061908] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.061935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.071721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.071874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.071899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.071914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.071926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.071953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.081764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.081921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.081946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.081960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.081972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.082000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.091792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.091948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.091973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.091987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.091999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.092026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.101851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.102000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.102030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.102045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.102058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.102086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.111881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.112076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.112101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.112116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.112128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.112155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.121881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.122035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.115 [2024-05-12 07:06:55.122060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.115 [2024-05-12 07:06:55.122074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.115 [2024-05-12 07:06:55.122086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.115 [2024-05-12 07:06:55.122113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.115 qpair failed and we were unable to recover it. 
00:26:48.115 [2024-05-12 07:06:55.131914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.115 [2024-05-12 07:06:55.132074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.132099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.132113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.132125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.132152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.141919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.142087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.142112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.142126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.142139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.142171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.151955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.152110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.152135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.152149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.152162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.152188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.162023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.162181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.162215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.162229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.162242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.162269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.172032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.172186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.172211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.172226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.172238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.172264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.182044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.182195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.182220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.182235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.182247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.182273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.192142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.192306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.192340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.192363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.192376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.192403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.202120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.202284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.202309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.202323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.202335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.202362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.212144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.212337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.212362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.212376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.212388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.212415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.222187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.222342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.222367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.222381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.222394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.222421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.116 [2024-05-12 07:06:55.232234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.116 [2024-05-12 07:06:55.232391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.116 [2024-05-12 07:06:55.232416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.116 [2024-05-12 07:06:55.232430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.116 [2024-05-12 07:06:55.232443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.116 [2024-05-12 07:06:55.232475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.116 qpair failed and we were unable to recover it. 
00:26:48.376 [2024-05-12 07:06:55.242259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.376 [2024-05-12 07:06:55.242406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.376 [2024-05-12 07:06:55.242432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.376 [2024-05-12 07:06:55.242446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.376 [2024-05-12 07:06:55.242458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.376 [2024-05-12 07:06:55.242486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.376 qpair failed and we were unable to recover it. 
00:26:48.376 [2024-05-12 07:06:55.252276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.376 [2024-05-12 07:06:55.252427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.376 [2024-05-12 07:06:55.252452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.376 [2024-05-12 07:06:55.252466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.376 [2024-05-12 07:06:55.252478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.376 [2024-05-12 07:06:55.252505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.376 qpair failed and we were unable to recover it. 
00:26:48.376 [2024-05-12 07:06:55.262296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.376 [2024-05-12 07:06:55.262447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.376 [2024-05-12 07:06:55.262471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.376 [2024-05-12 07:06:55.262486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.376 [2024-05-12 07:06:55.262498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.376 [2024-05-12 07:06:55.262525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.376 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.272307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.272454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.272479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.272493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.272505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.272532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.282351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.282500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.282530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.282545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.282558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.282585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.292522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.292704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.292729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.292743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.292756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.292783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.302425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.302578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.302604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.302618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.302631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.302658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.312462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.312611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.312637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.312651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.312663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.312691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.322543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.322702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.322727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.322742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.322754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.322788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.332529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.332682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.332715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.332730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.332743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.332771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.342527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.342694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.342726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.342740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.342752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.342780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.352543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.352714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.352739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.352753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.352765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.352792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.362586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.362765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.362796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.362810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.362823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.362850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.372674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.377 [2024-05-12 07:06:55.372839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.377 [2024-05-12 07:06:55.372869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.377 [2024-05-12 07:06:55.372885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.377 [2024-05-12 07:06:55.372897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.377 [2024-05-12 07:06:55.372925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.377 qpair failed and we were unable to recover it. 
00:26:48.377 [2024-05-12 07:06:55.382629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.377 [2024-05-12 07:06:55.382793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.377 [2024-05-12 07:06:55.382818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.377 [2024-05-12 07:06:55.382833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.377 [2024-05-12 07:06:55.382846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.377 [2024-05-12 07:06:55.382873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.377 qpair failed and we were unable to recover it.
00:26:48.377 [2024-05-12 07:06:55.392647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.377 [2024-05-12 07:06:55.392802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.377 [2024-05-12 07:06:55.392827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.377 [2024-05-12 07:06:55.392841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.377 [2024-05-12 07:06:55.392854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.377 [2024-05-12 07:06:55.392881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.377 qpair failed and we were unable to recover it.
00:26:48.377 [2024-05-12 07:06:55.402728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.377 [2024-05-12 07:06:55.402883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.377 [2024-05-12 07:06:55.402909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.377 [2024-05-12 07:06:55.402923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.377 [2024-05-12 07:06:55.402936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.402964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.412727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.412883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.412908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.412923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.412935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.412967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.422745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.422919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.422944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.422958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.422971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.422998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.432776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.432939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.432963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.432978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.432990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.433017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.442920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.443101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.443126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.443140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.443152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.443179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.452877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.453033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.453058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.453072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.453085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.453112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.462894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.463084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.463116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.463133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.463148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.463177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.472928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.473093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.473119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.473133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.473145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.473173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.482954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.483107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.483132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.483146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.483158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.483185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.492959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.493116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.493141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.493155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.493168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.493195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.378 [2024-05-12 07:06:55.502975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.378 [2024-05-12 07:06:55.503131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.378 [2024-05-12 07:06:55.503156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.378 [2024-05-12 07:06:55.503171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.378 [2024-05-12 07:06:55.503189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.378 [2024-05-12 07:06:55.503217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.378 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.513021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.513172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.513197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.513211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.513223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.513251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.523027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.523176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.523201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.523216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.523228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.523256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.533101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.533257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.533282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.533297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.533310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.533337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.543130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.543327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.543352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.543366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.543379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.543406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.553121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.553284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.553310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.553324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.553337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.553364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.563169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.563363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.563388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.563403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.563415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.563442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.573218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.573369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.573394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.573408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.573421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.573448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.583253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.583409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.583434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.583448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.583461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.583488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.593231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.593385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.593411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.593425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.593443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.593471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.603276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.603429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.603454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.603468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.603481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.603508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.613386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.613536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.613562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.613576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.613588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.613616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.623311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.623464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.623489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.623504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.623516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.640 [2024-05-12 07:06:55.623543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.640 qpair failed and we were unable to recover it.
00:26:48.640 [2024-05-12 07:06:55.633359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.640 [2024-05-12 07:06:55.633509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.640 [2024-05-12 07:06:55.633534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.640 [2024-05-12 07:06:55.633549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.640 [2024-05-12 07:06:55.633561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.633588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.643420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.643618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.643644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.643658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.643671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.643704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.653433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.653587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.653612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.653627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.653639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.653666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.663495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.663687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.663718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.663732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.663745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.663772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.673439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.673606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.673630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.673645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.673657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.673683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.683577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.683768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.683794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.683808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.683826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.683854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.693530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.693684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.693716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.693731] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.693743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.693770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.703585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.703740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.703765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.703780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.703793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.703820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.713579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.713738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.713764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.713778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.713791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.713819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.723655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.723851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.723877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.723892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.723904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.723932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.733665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.641 [2024-05-12 07:06:55.733843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.641 [2024-05-12 07:06:55.733868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.641 [2024-05-12 07:06:55.733883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.641 [2024-05-12 07:06:55.733895] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:48.641 [2024-05-12 07:06:55.733923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:48.641 qpair failed and we were unable to recover it.
00:26:48.641 [2024-05-12 07:06:55.743687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.641 [2024-05-12 07:06:55.743877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.641 [2024-05-12 07:06:55.743903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.641 [2024-05-12 07:06:55.743917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.641 [2024-05-12 07:06:55.743930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.641 [2024-05-12 07:06:55.743957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.641 qpair failed and we were unable to recover it. 
00:26:48.641 [2024-05-12 07:06:55.753786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.641 [2024-05-12 07:06:55.753941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.641 [2024-05-12 07:06:55.753970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.641 [2024-05-12 07:06:55.753986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.641 [2024-05-12 07:06:55.754002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.641 [2024-05-12 07:06:55.754031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.641 qpair failed and we were unable to recover it. 
00:26:48.641 [2024-05-12 07:06:55.763752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.641 [2024-05-12 07:06:55.763933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.641 [2024-05-12 07:06:55.763960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.641 [2024-05-12 07:06:55.763979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.641 [2024-05-12 07:06:55.763992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.641 [2024-05-12 07:06:55.764021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.641 qpair failed and we were unable to recover it. 
00:26:48.902 [2024-05-12 07:06:55.773761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.902 [2024-05-12 07:06:55.773957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.902 [2024-05-12 07:06:55.773983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.902 [2024-05-12 07:06:55.773998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.902 [2024-05-12 07:06:55.774016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.902 [2024-05-12 07:06:55.774044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.902 qpair failed and we were unable to recover it. 
00:26:48.902 [2024-05-12 07:06:55.783789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.902 [2024-05-12 07:06:55.783936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.902 [2024-05-12 07:06:55.783961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.902 [2024-05-12 07:06:55.783976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.902 [2024-05-12 07:06:55.783988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.902 [2024-05-12 07:06:55.784014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.902 qpair failed and we were unable to recover it. 
00:26:48.902 [2024-05-12 07:06:55.793855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.902 [2024-05-12 07:06:55.794012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.902 [2024-05-12 07:06:55.794037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.902 [2024-05-12 07:06:55.794051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.902 [2024-05-12 07:06:55.794063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.902 [2024-05-12 07:06:55.794090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.902 qpair failed and we were unable to recover it. 
00:26:48.902 [2024-05-12 07:06:55.803878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.902 [2024-05-12 07:06:55.804072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.902 [2024-05-12 07:06:55.804097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.902 [2024-05-12 07:06:55.804111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.902 [2024-05-12 07:06:55.804124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.902 [2024-05-12 07:06:55.804151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.902 qpair failed and we were unable to recover it. 
00:26:48.902 [2024-05-12 07:06:55.813959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.902 [2024-05-12 07:06:55.814119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.902 [2024-05-12 07:06:55.814145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.902 [2024-05-12 07:06:55.814159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.902 [2024-05-12 07:06:55.814172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.902 [2024-05-12 07:06:55.814199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.902 qpair failed and we were unable to recover it. 
00:26:48.902 [2024-05-12 07:06:55.823957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.902 [2024-05-12 07:06:55.824162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.902 [2024-05-12 07:06:55.824188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.824202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.824214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.824241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.833930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.834084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.834109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.834123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.834136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.834163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.843988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.844142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.844167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.844181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.844193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.844220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.854038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.854190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.854214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.854229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.854241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.854268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.864042] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.864193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.864218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.864239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.864252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.864279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.874066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.874280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.874306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.874321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.874337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.874366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.884060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.884208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.884233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.884247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.884259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.884287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.894149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.894301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.894326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.894340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.894353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.894380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.904151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.904298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.904324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.904338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.904350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.904378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.914170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.914328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.914353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.914367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.914379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.914406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.924242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.924428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.924453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.924468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.924480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.924507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.934235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.934386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.934411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.934425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.934437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.934464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.944324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.944478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.944502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.944516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.944528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.944555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.954335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.903 [2024-05-12 07:06:55.954486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.903 [2024-05-12 07:06:55.954511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.903 [2024-05-12 07:06:55.954531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.903 [2024-05-12 07:06:55.954544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.903 [2024-05-12 07:06:55.954571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.903 qpair failed and we were unable to recover it. 
00:26:48.903 [2024-05-12 07:06:55.964314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.904 [2024-05-12 07:06:55.964457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.904 [2024-05-12 07:06:55.964482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.904 [2024-05-12 07:06:55.964497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.904 [2024-05-12 07:06:55.964509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.904 [2024-05-12 07:06:55.964536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.904 qpair failed and we were unable to recover it. 
00:26:48.904 [2024-05-12 07:06:55.974456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.904 [2024-05-12 07:06:55.974613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.904 [2024-05-12 07:06:55.974638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.904 [2024-05-12 07:06:55.974652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.904 [2024-05-12 07:06:55.974664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.904 [2024-05-12 07:06:55.974691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.904 qpair failed and we were unable to recover it. 
00:26:48.904 [2024-05-12 07:06:55.984415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.904 [2024-05-12 07:06:55.984569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.904 [2024-05-12 07:06:55.984594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.904 [2024-05-12 07:06:55.984608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.904 [2024-05-12 07:06:55.984620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.904 [2024-05-12 07:06:55.984648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.904 qpair failed and we were unable to recover it. 
00:26:48.904 [2024-05-12 07:06:55.994437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.904 [2024-05-12 07:06:55.994613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.904 [2024-05-12 07:06:55.994637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.904 [2024-05-12 07:06:55.994651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.904 [2024-05-12 07:06:55.994664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.904 [2024-05-12 07:06:55.994691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.904 qpair failed and we were unable to recover it. 
00:26:48.904 [2024-05-12 07:06:56.004463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.904 [2024-05-12 07:06:56.004606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.904 [2024-05-12 07:06:56.004631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.904 [2024-05-12 07:06:56.004646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.904 [2024-05-12 07:06:56.004658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.904 [2024-05-12 07:06:56.004685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.904 qpair failed and we were unable to recover it. 
00:26:48.904 [2024-05-12 07:06:56.014510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.904 [2024-05-12 07:06:56.014664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.904 [2024-05-12 07:06:56.014690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.904 [2024-05-12 07:06:56.014714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.904 [2024-05-12 07:06:56.014727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.904 [2024-05-12 07:06:56.014754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.904 qpair failed and we were unable to recover it. 
00:26:48.904 [2024-05-12 07:06:56.024531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.904 [2024-05-12 07:06:56.024718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.904 [2024-05-12 07:06:56.024744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.904 [2024-05-12 07:06:56.024758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.904 [2024-05-12 07:06:56.024771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:48.904 [2024-05-12 07:06:56.024799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:48.904 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.034550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.034752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.034778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.034792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.034804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.034832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.044555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.044717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.044742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.044764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.044777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.044804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.054623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.054793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.054818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.054833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.054845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.054873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.064634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.064792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.064817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.064831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.064844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.064871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.074641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.074854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.074879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.074894] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.074906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.074933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.084721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.084881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.084905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.084920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.084932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.084959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.094724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.094922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.094947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.094961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.094974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.095001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.104733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.104900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.104925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.104939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.104952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.104979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.114787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.114948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.114973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.114988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.115000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.115027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.124831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.124999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.125026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.125045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.125057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.125086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.134855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.135038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.135063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.135083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.135096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.135124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.144950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.145104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.145129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.164 [2024-05-12 07:06:56.145144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.164 [2024-05-12 07:06:56.145156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.164 [2024-05-12 07:06:56.145183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.164 qpair failed and we were unable to recover it. 
00:26:49.164 [2024-05-12 07:06:56.154909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.164 [2024-05-12 07:06:56.155094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.164 [2024-05-12 07:06:56.155121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.155136] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.155152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.155181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.164914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.165065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.165091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.165106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.165118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.165146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.174976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.175139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.175165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.175180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.175192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.175219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.184957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.185133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.185158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.185172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.185185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.185212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.194988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.195134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.195159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.195173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.195185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.195213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.205045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.205197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.205221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.205235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.205247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.205275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.215065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.215232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.215258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.215273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.215286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.215313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.225077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.225258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.225289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.225304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.225317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.225345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.235111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.235260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.235286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.235301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.235314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.235343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.245128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.245276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.245302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.245317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.245330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.245357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.255185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.255338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.255363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.255377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.255390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.255417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.265216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.265368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.265393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.265407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.265419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.265447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.275210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.275370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.275395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.275409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.275421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.275449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.165 [2024-05-12 07:06:56.285252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.165 [2024-05-12 07:06:56.285425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.165 [2024-05-12 07:06:56.285450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.165 [2024-05-12 07:06:56.285464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.165 [2024-05-12 07:06:56.285477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.165 [2024-05-12 07:06:56.285504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.165 qpair failed and we were unable to recover it. 
00:26:49.425 [2024-05-12 07:06:56.295286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.425 [2024-05-12 07:06:56.295439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.425 [2024-05-12 07:06:56.295464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.425 [2024-05-12 07:06:56.295479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.425 [2024-05-12 07:06:56.295491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.425 [2024-05-12 07:06:56.295518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.425 qpair failed and we were unable to recover it. 
00:26:49.425 [2024-05-12 07:06:56.305334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.425 [2024-05-12 07:06:56.305532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.425 [2024-05-12 07:06:56.305558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.425 [2024-05-12 07:06:56.305572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.425 [2024-05-12 07:06:56.305585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.425 [2024-05-12 07:06:56.305612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.425 qpair failed and we were unable to recover it. 
00:26:49.425 [2024-05-12 07:06:56.315388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.425 [2024-05-12 07:06:56.315569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.425 [2024-05-12 07:06:56.315599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.425 [2024-05-12 07:06:56.315614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.425 [2024-05-12 07:06:56.315627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.425 [2024-05-12 07:06:56.315654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.425 qpair failed and we were unable to recover it. 
00:26:49.425 [2024-05-12 07:06:56.325375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.425 [2024-05-12 07:06:56.325547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.425 [2024-05-12 07:06:56.325572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.425 [2024-05-12 07:06:56.325587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.425 [2024-05-12 07:06:56.325599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.425 [2024-05-12 07:06:56.325625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.425 qpair failed and we were unable to recover it. 
00:26:49.425 [2024-05-12 07:06:56.335397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.425 [2024-05-12 07:06:56.335563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.425 [2024-05-12 07:06:56.335588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.425 [2024-05-12 07:06:56.335602] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.425 [2024-05-12 07:06:56.335615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.425 [2024-05-12 07:06:56.335642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.425 qpair failed and we were unable to recover it. 
00:26:49.425 [2024-05-12 07:06:56.345444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.425 [2024-05-12 07:06:56.345599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.425 [2024-05-12 07:06:56.345623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.425 [2024-05-12 07:06:56.345637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.425 [2024-05-12 07:06:56.345650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.425 [2024-05-12 07:06:56.345677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.425 qpair failed and we were unable to recover it.
00:26:49.425 [2024-05-12 07:06:56.355482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.425 [2024-05-12 07:06:56.355643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.425 [2024-05-12 07:06:56.355667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.425 [2024-05-12 07:06:56.355682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.425 [2024-05-12 07:06:56.355694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.425 [2024-05-12 07:06:56.355736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.425 qpair failed and we were unable to recover it.
00:26:49.425 [2024-05-12 07:06:56.365461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.425 [2024-05-12 07:06:56.365613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.425 [2024-05-12 07:06:56.365638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.425 [2024-05-12 07:06:56.365654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.425 [2024-05-12 07:06:56.365666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.425 [2024-05-12 07:06:56.365693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.425 qpair failed and we were unable to recover it.
00:26:49.425 [2024-05-12 07:06:56.375621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.425 [2024-05-12 07:06:56.375787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.425 [2024-05-12 07:06:56.375812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.425 [2024-05-12 07:06:56.375826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.425 [2024-05-12 07:06:56.375838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.375866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.385543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.385714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.385740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.385754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.385766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.385794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.395553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.395720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.395745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.395759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.395771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.395799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.405572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.405766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.405796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.405811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.405823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.405852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.415609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.415770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.415794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.415808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.415821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.415848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.425641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.425791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.425816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.425830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.425842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.425870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.435662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.435825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.435850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.435864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.435876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.435904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.445738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.445893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.445918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.445933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.445945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.445977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.455753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.455907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.455932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.455946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.455958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.455985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.465772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.465932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.465958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.465973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.465985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.466012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.475792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.475940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.475965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.475980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.475992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.476019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.485854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.486031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.486056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.486070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.486083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.486110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.495897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.496083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.496114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.496131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.496147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.496175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.505868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.506017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.506042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.506056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.426 [2024-05-12 07:06:56.506069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.426 [2024-05-12 07:06:56.506096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.426 qpair failed and we were unable to recover it.
00:26:49.426 [2024-05-12 07:06:56.515952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.426 [2024-05-12 07:06:56.516108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.426 [2024-05-12 07:06:56.516133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.426 [2024-05-12 07:06:56.516147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.427 [2024-05-12 07:06:56.516160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.427 [2024-05-12 07:06:56.516187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.427 qpair failed and we were unable to recover it.
00:26:49.427 [2024-05-12 07:06:56.525931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.427 [2024-05-12 07:06:56.526088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.427 [2024-05-12 07:06:56.526114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.427 [2024-05-12 07:06:56.526128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.427 [2024-05-12 07:06:56.526140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.427 [2024-05-12 07:06:56.526168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.427 qpair failed and we were unable to recover it.
00:26:49.427 [2024-05-12 07:06:56.536020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.427 [2024-05-12 07:06:56.536208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.427 [2024-05-12 07:06:56.536232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.427 [2024-05-12 07:06:56.536246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.427 [2024-05-12 07:06:56.536259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.427 [2024-05-12 07:06:56.536291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.427 qpair failed and we were unable to recover it.
00:26:49.427 [2024-05-12 07:06:56.546030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.427 [2024-05-12 07:06:56.546203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.427 [2024-05-12 07:06:56.546228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.427 [2024-05-12 07:06:56.546243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.427 [2024-05-12 07:06:56.546255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.427 [2024-05-12 07:06:56.546282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.427 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.556025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.556168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.556193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.556207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.556220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.556247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.566056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.566202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.566227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.566241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.566254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.566281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.576083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.576242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.576266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.576280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.576293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.576320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.586131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.586284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.586315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.586330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.586343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.586370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.596167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.596318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.596342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.596356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.596369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.596395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.606192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.606343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.606368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.606382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.606394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.606422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.616194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.616396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.616421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.616435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.616448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.616475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.626229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.626382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.626407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.626422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.626439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.626468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.636262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.636419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.636444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.636458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.636471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.636498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.646317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.646473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.646499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.646514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.646526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.646553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.656446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.656618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.688 [2024-05-12 07:06:56.656643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.688 [2024-05-12 07:06:56.656657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.688 [2024-05-12 07:06:56.656669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.688 [2024-05-12 07:06:56.656703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.688 qpair failed and we were unable to recover it.
00:26:49.688 [2024-05-12 07:06:56.666344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.688 [2024-05-12 07:06:56.666495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.689 [2024-05-12 07:06:56.666518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.689 [2024-05-12 07:06:56.666532] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.689 [2024-05-12 07:06:56.666545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.689 [2024-05-12 07:06:56.666572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.689 qpair failed and we were unable to recover it.
00:26:49.689 [2024-05-12 07:06:56.676419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.689 [2024-05-12 07:06:56.676598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.689 [2024-05-12 07:06:56.676633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.689 [2024-05-12 07:06:56.676652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.689 [2024-05-12 07:06:56.676664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.689 [2024-05-12 07:06:56.676693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.689 qpair failed and we were unable to recover it.
00:26:49.689 [2024-05-12 07:06:56.686414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.689 [2024-05-12 07:06:56.686565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.689 [2024-05-12 07:06:56.686591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.689 [2024-05-12 07:06:56.686606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.689 [2024-05-12 07:06:56.686619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.689 [2024-05-12 07:06:56.686646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.689 qpair failed and we were unable to recover it.
00:26:49.689 [2024-05-12 07:06:56.696444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.689 [2024-05-12 07:06:56.696597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.689 [2024-05-12 07:06:56.696622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.689 [2024-05-12 07:06:56.696636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.689 [2024-05-12 07:06:56.696649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.689 [2024-05-12 07:06:56.696676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.689 qpair failed and we were unable to recover it.
00:26:49.689 [2024-05-12 07:06:56.706459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.706615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.706640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.706655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.706667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.706701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.716496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.716649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.716673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.716687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.716711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.716740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.726562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.726746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.726772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.726786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.726799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.726826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.736587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.736747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.736772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.736787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.736799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.736827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.746621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.746807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.746832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.746847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.746859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.746887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.756634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.756791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.756816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.756831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.756843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.756871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.766665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.766860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.766886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.766905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.766918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.766947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.776710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.776884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.776909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.776924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.776936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.776963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.786738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.786893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.786918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.786932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.786944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.786971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.796753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.689 [2024-05-12 07:06:56.796910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.689 [2024-05-12 07:06:56.796935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.689 [2024-05-12 07:06:56.796949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.689 [2024-05-12 07:06:56.796962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.689 [2024-05-12 07:06:56.796989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.689 qpair failed and we were unable to recover it. 
00:26:49.689 [2024-05-12 07:06:56.806773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.690 [2024-05-12 07:06:56.806924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.690 [2024-05-12 07:06:56.806949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.690 [2024-05-12 07:06:56.806963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.690 [2024-05-12 07:06:56.806981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.690 [2024-05-12 07:06:56.807009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.690 qpair failed and we were unable to recover it. 
00:26:49.950 [2024-05-12 07:06:56.816779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.950 [2024-05-12 07:06:56.816938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.950 [2024-05-12 07:06:56.816963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.816977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.816989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.817016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.826815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.826958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.826983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.826997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.827010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.827037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.836848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.836997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.837022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.837036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.837048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.837076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.846889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.847033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.847058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.847072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.847085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.847112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.856898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.857064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.857089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.857104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.857116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.857144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.866922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.867101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.867132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.867146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.867159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.867186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.877041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.877198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.877223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.877238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.877250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.877277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.887029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.887183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.887207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.887222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.887234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.887261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.897081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.897284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.897309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.897323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.897340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.897369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.907103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.907310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.907335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.907349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.907361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.907388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.917095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.917272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.917296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.917310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.917323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.917350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.927173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.927330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.927355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.927369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.927382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.927409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.937149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.937307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.937331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.937345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.937357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.937385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.947146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.951 [2024-05-12 07:06:56.947313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.951 [2024-05-12 07:06:56.947338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.951 [2024-05-12 07:06:56.947353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.951 [2024-05-12 07:06:56.947365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.951 [2024-05-12 07:06:56.947392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.951 qpair failed and we were unable to recover it. 
00:26:49.951 [2024-05-12 07:06:56.957165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.952 [2024-05-12 07:06:56.957319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.952 [2024-05-12 07:06:56.957344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.952 [2024-05-12 07:06:56.957358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.952 [2024-05-12 07:06:56.957371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.952 [2024-05-12 07:06:56.957397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.952 qpair failed and we were unable to recover it. 
00:26:49.952 [2024-05-12 07:06:56.967253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.952 [2024-05-12 07:06:56.967408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.952 [2024-05-12 07:06:56.967433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.952 [2024-05-12 07:06:56.967447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.952 [2024-05-12 07:06:56.967460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.952 [2024-05-12 07:06:56.967488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.952 qpair failed and we were unable to recover it. 
00:26:49.952 [2024-05-12 07:06:56.977301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.952 [2024-05-12 07:06:56.977495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.952 [2024-05-12 07:06:56.977523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.952 [2024-05-12 07:06:56.977539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.952 [2024-05-12 07:06:56.977551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:49.952 [2024-05-12 07:06:56.977580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:49.952 qpair failed and we were unable to recover it. 
00:26:49.952 [2024-05-12 07:06:56.987271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:56.987429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:56.987455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:56.987474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:56.987488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:56.987515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:49.952 [2024-05-12 07:06:56.997295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:56.997442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:56.997467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:56.997482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:56.997494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:56.997521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:49.952 [2024-05-12 07:06:57.007319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:57.007464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:57.007489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:57.007503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:57.007516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:57.007543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:49.952 [2024-05-12 07:06:57.017366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:57.017524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:57.017549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:57.017563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:57.017576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:57.017603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:49.952 [2024-05-12 07:06:57.027381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:57.027529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:57.027555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:57.027570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:57.027582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:57.027609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:49.952 [2024-05-12 07:06:57.037472] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:57.037658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:57.037684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:57.037706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:57.037720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:57.037748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:49.952 [2024-05-12 07:06:57.047438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:57.047589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:57.047614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:57.047628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:57.047640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:57.047667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:49.952 [2024-05-12 07:06:57.057523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:57.057681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:57.057712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:57.057727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:57.057739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:57.057767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:49.952 [2024-05-12 07:06:57.067492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:57.067650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:57.067676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:57.067690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:57.067710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:57.067738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:49.952 [2024-05-12 07:06:57.077544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.952 [2024-05-12 07:06:57.077722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.952 [2024-05-12 07:06:57.077749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.952 [2024-05-12 07:06:57.077772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.952 [2024-05-12 07:06:57.077785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:49.952 [2024-05-12 07:06:57.077813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:49.952 qpair failed and we were unable to recover it.
00:26:50.213 [2024-05-12 07:06:57.087561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.213 [2024-05-12 07:06:57.087740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.213 [2024-05-12 07:06:57.087766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.213 [2024-05-12 07:06:57.087780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.213 [2024-05-12 07:06:57.087792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.213 [2024-05-12 07:06:57.087820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.213 qpair failed and we were unable to recover it.
00:26:50.213 [2024-05-12 07:06:57.097622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.213 [2024-05-12 07:06:57.097794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.213 [2024-05-12 07:06:57.097819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.213 [2024-05-12 07:06:57.097833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.213 [2024-05-12 07:06:57.097846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.213 [2024-05-12 07:06:57.097874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.213 qpair failed and we were unable to recover it.
00:26:50.213 [2024-05-12 07:06:57.107607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.213 [2024-05-12 07:06:57.107774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.213 [2024-05-12 07:06:57.107799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.213 [2024-05-12 07:06:57.107814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.213 [2024-05-12 07:06:57.107826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.213 [2024-05-12 07:06:57.107853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.213 qpair failed and we were unable to recover it.
00:26:50.213 [2024-05-12 07:06:57.117643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.213 [2024-05-12 07:06:57.117803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.213 [2024-05-12 07:06:57.117828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.213 [2024-05-12 07:06:57.117843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.213 [2024-05-12 07:06:57.117855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.213 [2024-05-12 07:06:57.117883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.213 qpair failed and we were unable to recover it.
00:26:50.213 [2024-05-12 07:06:57.127702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.213 [2024-05-12 07:06:57.127849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.213 [2024-05-12 07:06:57.127875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.213 [2024-05-12 07:06:57.127889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.213 [2024-05-12 07:06:57.127901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.213 [2024-05-12 07:06:57.127928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.213 qpair failed and we were unable to recover it.
00:26:50.213 [2024-05-12 07:06:57.137720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.213 [2024-05-12 07:06:57.137877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.213 [2024-05-12 07:06:57.137902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.213 [2024-05-12 07:06:57.137917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.213 [2024-05-12 07:06:57.137929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.213 [2024-05-12 07:06:57.137956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.213 qpair failed and we were unable to recover it.
00:26:50.213 [2024-05-12 07:06:57.147781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.213 [2024-05-12 07:06:57.147938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.213 [2024-05-12 07:06:57.147964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.213 [2024-05-12 07:06:57.147983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.213 [2024-05-12 07:06:57.147996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.213 [2024-05-12 07:06:57.148024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.213 qpair failed and we were unable to recover it.
00:26:50.213 [2024-05-12 07:06:57.157780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.157951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.157976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.157990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.158003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.158031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.167895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.168045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.168070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.168091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.168104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.168133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.177849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.178001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.178027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.178042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.178054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.178081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.187918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.188125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.188150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.188165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.188177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.188205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.197882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.198036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.198061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.198075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.198088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.198116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.207921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.208078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.208103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.208117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.208129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.208156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.217941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.218098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.218123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.218138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.218150] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.218177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.228022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.228214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.228239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.228254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.228266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.228293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.238034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.238192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.238217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.238231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.238243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.238270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.248064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.248236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.248262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.248276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.248288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.248316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.258099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.258256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.258280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.258300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.258313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.258341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.268071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.268219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.268244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.268258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.268270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.268297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.278110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.278255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.278280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.278294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.278307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.278334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.214 [2024-05-12 07:06:57.288144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.214 [2024-05-12 07:06:57.288289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.214 [2024-05-12 07:06:57.288313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.214 [2024-05-12 07:06:57.288327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.214 [2024-05-12 07:06:57.288340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.214 [2024-05-12 07:06:57.288367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.214 qpair failed and we were unable to recover it.
00:26:50.215 [2024-05-12 07:06:57.298193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.215 [2024-05-12 07:06:57.298345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.215 [2024-05-12 07:06:57.298370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.215 [2024-05-12 07:06:57.298384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.215 [2024-05-12 07:06:57.298396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.215 [2024-05-12 07:06:57.298423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.215 qpair failed and we were unable to recover it.
00:26:50.215 [2024-05-12 07:06:57.308206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.215 [2024-05-12 07:06:57.308353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.215 [2024-05-12 07:06:57.308378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.215 [2024-05-12 07:06:57.308392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.215 [2024-05-12 07:06:57.308404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.215 [2024-05-12 07:06:57.308431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.215 qpair failed and we were unable to recover it.
00:26:50.215 [2024-05-12 07:06:57.318269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.215 [2024-05-12 07:06:57.318419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.215 [2024-05-12 07:06:57.318444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.215 [2024-05-12 07:06:57.318462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.215 [2024-05-12 07:06:57.318474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.215 [2024-05-12 07:06:57.318502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.215 qpair failed and we were unable to recover it.
00:26:50.215 [2024-05-12 07:06:57.328309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.215 [2024-05-12 07:06:57.328463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.215 [2024-05-12 07:06:57.328487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.215 [2024-05-12 07:06:57.328502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.215 [2024-05-12 07:06:57.328514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.215 [2024-05-12 07:06:57.328542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.215 qpair failed and we were unable to recover it.
00:26:50.215 [2024-05-12 07:06:57.338344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.215 [2024-05-12 07:06:57.338495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.215 [2024-05-12 07:06:57.338520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.215 [2024-05-12 07:06:57.338534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.215 [2024-05-12 07:06:57.338546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:50.215 [2024-05-12 07:06:57.338574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:50.215 qpair failed and we were unable to recover it.
00:26:50.476 [2024-05-12 07:06:57.348343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.476 [2024-05-12 07:06:57.348515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.476 [2024-05-12 07:06:57.348540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.476 [2024-05-12 07:06:57.348560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.476 [2024-05-12 07:06:57.348573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.476 [2024-05-12 07:06:57.348601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.476 qpair failed and we were unable to recover it. 
00:26:50.476 [2024-05-12 07:06:57.358369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.476 [2024-05-12 07:06:57.358518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.358544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.358558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.358570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.358597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.368418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.368603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.368629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.368647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.368661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.368689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.378464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.378656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.378682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.378705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.378719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.378747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.388460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.388637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.388662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.388676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.388689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.388726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.398472] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.398636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.398661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.398675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.398688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.398722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.408543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.408701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.408727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.408741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.408753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.408781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.418560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.418734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.418759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.418774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.418786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.418814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.428562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.428716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.428750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.428766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.428778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.428806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.438609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.438783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.438814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.438829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.438841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.438869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.448620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.448773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.448799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.448814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.448826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.448853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.458690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.458854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.458879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.458893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.458905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.458933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.468719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.468872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.468898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.468912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.468925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.468952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.478756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.478918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.478944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.478958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.478971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.478999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.488756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.477 [2024-05-12 07:06:57.488902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.477 [2024-05-12 07:06:57.488928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.477 [2024-05-12 07:06:57.488942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.477 [2024-05-12 07:06:57.488954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.477 [2024-05-12 07:06:57.488982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.477 qpair failed and we were unable to recover it. 
00:26:50.477 [2024-05-12 07:06:57.498821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.498980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.499005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.499019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.499031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.499059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.508830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.509001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.509026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.509040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.509052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.509079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.518881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.519040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.519065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.519079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.519092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.519119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.528874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.529027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.529056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.529071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.529084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.529112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.538909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.539080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.539104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.539118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.539131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.539158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.548945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.549093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.549117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.549132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.549144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.549172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.558943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.559091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.559116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.559130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.559142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.559170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.569000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.569157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.569182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.569196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.569208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.569240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.579024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.579196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.579229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.579243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.579256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.579284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.589036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.589184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.589209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.589224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.589237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.589264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.478 [2024-05-12 07:06:57.599177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.478 [2024-05-12 07:06:57.599330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.478 [2024-05-12 07:06:57.599357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.478 [2024-05-12 07:06:57.599375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.478 [2024-05-12 07:06:57.599388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.478 [2024-05-12 07:06:57.599416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.478 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.609132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.609281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.609307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.609321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.609334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.609361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.619239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.619424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.619457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.619475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.619490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.619518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.629166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.629332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.629357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.629371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.629383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.629411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.639219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.639424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.639450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.639464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.639479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.639506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.649286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.649454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.649480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.649494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.649509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.649535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.659292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.659447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.659472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.659486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.659498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.659530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.669331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.669500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.669523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.669537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.669552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.669579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.679365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.679519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.679547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.679561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.679573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.679601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.689362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.689571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.689597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.689612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.689628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.689655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.699423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.699634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.699659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.699673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.699685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.699720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.709458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.709608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.709642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.740 [2024-05-12 07:06:57.709659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.740 [2024-05-12 07:06:57.709671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.740 [2024-05-12 07:06:57.709707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.740 qpair failed and we were unable to recover it. 
00:26:50.740 [2024-05-12 07:06:57.719439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.740 [2024-05-12 07:06:57.719592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.740 [2024-05-12 07:06:57.719618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.719632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.719645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.719673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.729479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.729632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.729657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.729671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.729684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.729718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.739559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.739755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.739783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.739798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.739810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.739839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.749598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.749775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.749801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.749816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.749829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.749861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.759571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.759734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.759759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.759773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.759785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.759814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.769623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.769783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.769808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.769822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.769835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.769862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.779693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.779856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.779881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.779895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.779908] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.779935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.789645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.789808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.789833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.789847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.789859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.789886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.799670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.799821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.799851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.799867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.799879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.799907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.809720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.809878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.809903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.809917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.809929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.809957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.819750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.819944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.819970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.819984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.819997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.820024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.829816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.829971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.829996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.830011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.830023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.830050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.839847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.840003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.840028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.840043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.840060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.840089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.849833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.850053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.741 [2024-05-12 07:06:57.850078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.741 [2024-05-12 07:06:57.850092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.741 [2024-05-12 07:06:57.850104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.741 [2024-05-12 07:06:57.850132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.741 qpair failed and we were unable to recover it. 
00:26:50.741 [2024-05-12 07:06:57.859857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.741 [2024-05-12 07:06:57.860010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.742 [2024-05-12 07:06:57.860034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.742 [2024-05-12 07:06:57.860048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.742 [2024-05-12 07:06:57.860060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:50.742 [2024-05-12 07:06:57.860087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.742 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.869895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.870051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.870076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.870090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.870104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.870131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.879918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.880080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.880105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.880120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.880132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.880159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.889941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.890097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.890127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.890141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.890154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.890182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.900047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.900204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.900230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.900245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.900258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.900286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.910018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.910215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.910240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.910254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.910266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.910294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.920104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.920284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.920309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.920323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.920335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.920363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.930042] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.930200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.930225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.930239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.930257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.930285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.940111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.940275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.940300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.940315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.940330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.940357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.950142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.950298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.950323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.950337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.950349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.950377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.960167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.960317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.960342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.960356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.960369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.960396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.970206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.970369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.970394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.970408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.970420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.970448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.980226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.980390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.980416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.980430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.002 [2024-05-12 07:06:57.980443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.002 [2024-05-12 07:06:57.980470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.002 qpair failed and we were unable to recover it. 
00:26:51.002 [2024-05-12 07:06:57.990253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.002 [2024-05-12 07:06:57.990408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.002 [2024-05-12 07:06:57.990432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.002 [2024-05-12 07:06:57.990446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:57.990458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:57.990486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.000275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.000462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.000487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.000501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.000513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.000540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.010288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.010436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.010460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.010474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.010487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.010514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.020330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.020480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.020505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.020520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.020537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.020565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.030342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.030497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.030522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.030537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.030549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.030577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.040384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.040535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.040560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.040575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.040587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.040614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.050385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.050532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.050557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.050571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.050584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.050611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.060453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.060605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.060630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.060645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.060658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.060685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.070451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.070606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.070631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.070645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.070657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.070685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.080487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.080680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.080711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.080726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.080739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.080766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.090569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.090762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.090789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.090807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.090820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.090849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.100541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.100694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.100764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.100779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.100791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.100819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.110555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.110709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.110735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.110749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.110766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.110795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.003 [2024-05-12 07:06:58.120602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.003 [2024-05-12 07:06:58.120762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.003 [2024-05-12 07:06:58.120787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.003 [2024-05-12 07:06:58.120801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.003 [2024-05-12 07:06:58.120814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.003 [2024-05-12 07:06:58.120841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.003 qpair failed and we were unable to recover it. 
00:26:51.263 [2024-05-12 07:06:58.130650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.263 [2024-05-12 07:06:58.130850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.263 [2024-05-12 07:06:58.130875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.263 [2024-05-12 07:06:58.130889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.263 [2024-05-12 07:06:58.130902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.263 [2024-05-12 07:06:58.130929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.263 qpair failed and we were unable to recover it. 
00:26:51.263 [2024-05-12 07:06:58.140669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.263 [2024-05-12 07:06:58.140848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.263 [2024-05-12 07:06:58.140873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.263 [2024-05-12 07:06:58.140890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.263 [2024-05-12 07:06:58.140903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.263 [2024-05-12 07:06:58.140931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.263 qpair failed and we were unable to recover it. 
00:26:51.263 [2024-05-12 07:06:58.150689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.263 [2024-05-12 07:06:58.150856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.263 [2024-05-12 07:06:58.150882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.263 [2024-05-12 07:06:58.150896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.263 [2024-05-12 07:06:58.150908] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.263 [2024-05-12 07:06:58.150935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.263 qpair failed and we were unable to recover it. 
00:26:51.263 [2024-05-12 07:06:58.160692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.263 [2024-05-12 07:06:58.160850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.263 [2024-05-12 07:06:58.160875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.263 [2024-05-12 07:06:58.160890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.263 [2024-05-12 07:06:58.160902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.263 [2024-05-12 07:06:58.160929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.263 qpair failed and we were unable to recover it. 
00:26:51.263 [2024-05-12 07:06:58.170784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.263 [2024-05-12 07:06:58.170939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.263 [2024-05-12 07:06:58.170963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.263 [2024-05-12 07:06:58.170978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.170990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.171017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.180820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.264 [2024-05-12 07:06:58.180980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.264 [2024-05-12 07:06:58.181005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.264 [2024-05-12 07:06:58.181020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.181032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.181059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.190804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.264 [2024-05-12 07:06:58.190963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.264 [2024-05-12 07:06:58.190988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.264 [2024-05-12 07:06:58.191002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.191014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.191042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.200844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.264 [2024-05-12 07:06:58.200993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.264 [2024-05-12 07:06:58.201019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.264 [2024-05-12 07:06:58.201039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.201052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.201080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.210837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.264 [2024-05-12 07:06:58.211009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.264 [2024-05-12 07:06:58.211035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.264 [2024-05-12 07:06:58.211050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.211062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.211090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.220882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.264 [2024-05-12 07:06:58.221056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.264 [2024-05-12 07:06:58.221081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.264 [2024-05-12 07:06:58.221095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.221107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.221134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.230935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.264 [2024-05-12 07:06:58.231136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.264 [2024-05-12 07:06:58.231161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.264 [2024-05-12 07:06:58.231176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.231188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.231215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.240935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.264 [2024-05-12 07:06:58.241081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.264 [2024-05-12 07:06:58.241106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.264 [2024-05-12 07:06:58.241120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.241132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.241159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.250963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.264 [2024-05-12 07:06:58.251115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.264 [2024-05-12 07:06:58.251140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.264 [2024-05-12 07:06:58.251154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.251166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.251193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.261044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.264 [2024-05-12 07:06:58.261241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.264 [2024-05-12 07:06:58.261266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.264 [2024-05-12 07:06:58.261280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.264 [2024-05-12 07:06:58.261292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.264 [2024-05-12 07:06:58.261319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.264 qpair failed and we were unable to recover it. 
00:26:51.264 [2024-05-12 07:06:58.271039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.264 [2024-05-12 07:06:58.271222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.264 [2024-05-12 07:06:58.271247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.264 [2024-05-12 07:06:58.271262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.264 [2024-05-12 07:06:58.271274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.264 [2024-05-12 07:06:58.271301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.264 qpair failed and we were unable to recover it.
00:26:51.264 [2024-05-12 07:06:58.281078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.264 [2024-05-12 07:06:58.281273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.264 [2024-05-12 07:06:58.281298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.264 [2024-05-12 07:06:58.281323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.264 [2024-05-12 07:06:58.281335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.264 [2024-05-12 07:06:58.281361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.264 qpair failed and we were unable to recover it.
00:26:51.264 [2024-05-12 07:06:58.291146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.264 [2024-05-12 07:06:58.291303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.264 [2024-05-12 07:06:58.291328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.264 [2024-05-12 07:06:58.291350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.264 [2024-05-12 07:06:58.291363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.264 [2024-05-12 07:06:58.291390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.264 qpair failed and we were unable to recover it.
00:26:51.264 [2024-05-12 07:06:58.301098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.264 [2024-05-12 07:06:58.301255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.264 [2024-05-12 07:06:58.301279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.264 [2024-05-12 07:06:58.301293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.264 [2024-05-12 07:06:58.301305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.264 [2024-05-12 07:06:58.301332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.264 qpair failed and we were unable to recover it.
00:26:51.264 [2024-05-12 07:06:58.311118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.264 [2024-05-12 07:06:58.311264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.264 [2024-05-12 07:06:58.311290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.265 [2024-05-12 07:06:58.311304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.265 [2024-05-12 07:06:58.311316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.265 [2024-05-12 07:06:58.311344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.265 qpair failed and we were unable to recover it.
00:26:51.265 [2024-05-12 07:06:58.321195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.265 [2024-05-12 07:06:58.321404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.265 [2024-05-12 07:06:58.321431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.265 [2024-05-12 07:06:58.321445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.265 [2024-05-12 07:06:58.321461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.265 [2024-05-12 07:06:58.321490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.265 qpair failed and we were unable to recover it.
00:26:51.265 [2024-05-12 07:06:58.331267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.265 [2024-05-12 07:06:58.331459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.265 [2024-05-12 07:06:58.331484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.265 [2024-05-12 07:06:58.331499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.265 [2024-05-12 07:06:58.331511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.265 [2024-05-12 07:06:58.331539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.265 qpair failed and we were unable to recover it.
00:26:51.265 [2024-05-12 07:06:58.341224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.265 [2024-05-12 07:06:58.341378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.265 [2024-05-12 07:06:58.341403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.265 [2024-05-12 07:06:58.341417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.265 [2024-05-12 07:06:58.341429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.265 [2024-05-12 07:06:58.341457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.265 qpair failed and we were unable to recover it.
00:26:51.265 [2024-05-12 07:06:58.351253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.265 [2024-05-12 07:06:58.351401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.265 [2024-05-12 07:06:58.351426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.265 [2024-05-12 07:06:58.351441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.265 [2024-05-12 07:06:58.351453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.265 [2024-05-12 07:06:58.351480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.265 qpair failed and we were unable to recover it.
00:26:51.265 [2024-05-12 07:06:58.361292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.265 [2024-05-12 07:06:58.361450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.265 [2024-05-12 07:06:58.361478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.265 [2024-05-12 07:06:58.361494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.265 [2024-05-12 07:06:58.361506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.265 [2024-05-12 07:06:58.361533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.265 qpair failed and we were unable to recover it.
00:26:51.265 [2024-05-12 07:06:58.371328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.265 [2024-05-12 07:06:58.371491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.265 [2024-05-12 07:06:58.371517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.265 [2024-05-12 07:06:58.371531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.265 [2024-05-12 07:06:58.371546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.265 [2024-05-12 07:06:58.371574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.265 qpair failed and we were unable to recover it.
00:26:51.265 [2024-05-12 07:06:58.381382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.265 [2024-05-12 07:06:58.381573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.265 [2024-05-12 07:06:58.381599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.265 [2024-05-12 07:06:58.381619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.265 [2024-05-12 07:06:58.381632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.265 [2024-05-12 07:06:58.381660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.265 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.391358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.391510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.391536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.391550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.391562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.391589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.401368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.401527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.401552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.401567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.401579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.401606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.411407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.411555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.411580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.411594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.411606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.411633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.421438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.421640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.421665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.421680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.421692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.421730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.431461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.431614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.431639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.431654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.431666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.431693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.441494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.441661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.441686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.441708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.441722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.441749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.451536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.451680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.451713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.451728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.451740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.451768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.461667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.461873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.461897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.461911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.461924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.461951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.471607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.471775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.471800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.471819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.471832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.471859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.481639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.481801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.481828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.481842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.481854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.481881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.491620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.491774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.491799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.526 [2024-05-12 07:06:58.491814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.526 [2024-05-12 07:06:58.491826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.526 [2024-05-12 07:06:58.491854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.526 qpair failed and we were unable to recover it.
00:26:51.526 [2024-05-12 07:06:58.501720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.526 [2024-05-12 07:06:58.501875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.526 [2024-05-12 07:06:58.501900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.501914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.501927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.501954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.511710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.511882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.511907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.511921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.511933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.511961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.521782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.521984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.522017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.522031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.522044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.522071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.531805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.531950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.531975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.531989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.532001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.532040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.541818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.541975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.542000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.542015] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.542027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.542055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.551826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.551984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.552015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.552029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.552041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.552068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.561853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.562049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.562086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.562106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.562119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.562146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.571910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.572083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.572107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.572121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.572134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.572161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.581951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.582114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.582139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.582154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.582166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.582193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.591970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.592133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.592158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.592172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.592184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.592212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.601952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.602104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.602128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.602143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.602155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.602182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.612073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.612253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.612279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.612293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.612305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.612332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.622040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.527 [2024-05-12 07:06:58.622234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.527 [2024-05-12 07:06:58.622258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.527 [2024-05-12 07:06:58.622272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.527 [2024-05-12 07:06:58.622285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.527 [2024-05-12 07:06:58.622312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.527 qpair failed and we were unable to recover it.
00:26:51.527 [2024-05-12 07:06:58.632057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.527 [2024-05-12 07:06:58.632291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.527 [2024-05-12 07:06:58.632315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.527 [2024-05-12 07:06:58.632330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.527 [2024-05-12 07:06:58.632342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.527 [2024-05-12 07:06:58.632369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.527 qpair failed and we were unable to recover it. 
00:26:51.527 [2024-05-12 07:06:58.642100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.527 [2024-05-12 07:06:58.642294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.528 [2024-05-12 07:06:58.642319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.528 [2024-05-12 07:06:58.642333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.528 [2024-05-12 07:06:58.642345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.528 [2024-05-12 07:06:58.642372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.528 qpair failed and we were unable to recover it. 
00:26:51.528 [2024-05-12 07:06:58.652133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.528 [2024-05-12 07:06:58.652282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.528 [2024-05-12 07:06:58.652312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.528 [2024-05-12 07:06:58.652328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.528 [2024-05-12 07:06:58.652340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.528 [2024-05-12 07:06:58.652367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.528 qpair failed and we were unable to recover it. 
00:26:51.789 [2024-05-12 07:06:58.662132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.789 [2024-05-12 07:06:58.662283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.789 [2024-05-12 07:06:58.662308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.789 [2024-05-12 07:06:58.662322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.789 [2024-05-12 07:06:58.662334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.789 [2024-05-12 07:06:58.662361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.789 qpair failed and we were unable to recover it. 
00:26:51.789 [2024-05-12 07:06:58.672235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.789 [2024-05-12 07:06:58.672431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.789 [2024-05-12 07:06:58.672454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.789 [2024-05-12 07:06:58.672468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.789 [2024-05-12 07:06:58.672480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.789 [2024-05-12 07:06:58.672507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.789 qpair failed and we were unable to recover it. 
00:26:51.789 [2024-05-12 07:06:58.682230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.789 [2024-05-12 07:06:58.682418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.789 [2024-05-12 07:06:58.682443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.789 [2024-05-12 07:06:58.682457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.789 [2024-05-12 07:06:58.682470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.789 [2024-05-12 07:06:58.682497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.692211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.692357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.692382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.692396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.692408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.692435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.702293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.702446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.702471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.702485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.702497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.702525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.712349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.712536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.712560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.712574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.712586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.712613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.722329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.722509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.722534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.722549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.722561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.722588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.732364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.732539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.732564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.732578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.732591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.732618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.742412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.742613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.742643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.742659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.742671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.742706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.752441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.752603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.752629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.752643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.752655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.752683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.762463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.762616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.762641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.762655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.762668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.762703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.772544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.772688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.772723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.772738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.772750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.772777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.782489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.782638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.782663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.782677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.782688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.782728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.792533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.792688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.792719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.792734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.792746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.792773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.802542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.802692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.802722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.802737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.802749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.802776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.812575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.812735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.812760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.790 [2024-05-12 07:06:58.812774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.790 [2024-05-12 07:06:58.812787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.790 [2024-05-12 07:06:58.812814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.790 qpair failed and we were unable to recover it. 
00:26:51.790 [2024-05-12 07:06:58.822608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.790 [2024-05-12 07:06:58.822771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.790 [2024-05-12 07:06:58.822796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.791 [2024-05-12 07:06:58.822810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.791 [2024-05-12 07:06:58.822823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.791 [2024-05-12 07:06:58.822850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.791 qpair failed and we were unable to recover it. 
00:26:51.791 [2024-05-12 07:06:58.832745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.791 [2024-05-12 07:06:58.832916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.791 [2024-05-12 07:06:58.832946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.791 [2024-05-12 07:06:58.832961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.791 [2024-05-12 07:06:58.832973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.791 [2024-05-12 07:06:58.833001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.791 qpair failed and we were unable to recover it. 
00:26:51.791 [2024-05-12 07:06:58.842682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.791 [2024-05-12 07:06:58.842849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.791 [2024-05-12 07:06:58.842874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.791 [2024-05-12 07:06:58.842888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.791 [2024-05-12 07:06:58.842900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.791 [2024-05-12 07:06:58.842927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.791 qpair failed and we were unable to recover it. 
00:26:51.791 [2024-05-12 07:06:58.852776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.791 [2024-05-12 07:06:58.852948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.791 [2024-05-12 07:06:58.852973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.791 [2024-05-12 07:06:58.852987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.791 [2024-05-12 07:06:58.852999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.791 [2024-05-12 07:06:58.853026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.791 qpair failed and we were unable to recover it. 
00:26:51.791 [2024-05-12 07:06:58.862779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.791 [2024-05-12 07:06:58.862932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.791 [2024-05-12 07:06:58.862957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.791 [2024-05-12 07:06:58.862971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.791 [2024-05-12 07:06:58.862983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.791 [2024-05-12 07:06:58.863010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.791 qpair failed and we were unable to recover it. 
00:26:51.791 [2024-05-12 07:06:58.872798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.791 [2024-05-12 07:06:58.872959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.791 [2024-05-12 07:06:58.872983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.791 [2024-05-12 07:06:58.872997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.791 [2024-05-12 07:06:58.873009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.791 [2024-05-12 07:06:58.873042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.791 qpair failed and we were unable to recover it. 
00:26:51.791 [2024-05-12 07:06:58.882869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.791 [2024-05-12 07:06:58.883033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.791 [2024-05-12 07:06:58.883059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.791 [2024-05-12 07:06:58.883073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.791 [2024-05-12 07:06:58.883086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.791 [2024-05-12 07:06:58.883113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.791 qpair failed and we were unable to recover it. 
00:26:51.791 [2024-05-12 07:06:58.892896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.791 [2024-05-12 07:06:58.893103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.791 [2024-05-12 07:06:58.893128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.791 [2024-05-12 07:06:58.893142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.791 [2024-05-12 07:06:58.893155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.791 [2024-05-12 07:06:58.893182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.791 qpair failed and we were unable to recover it. 
00:26:51.791 [2024-05-12 07:06:58.902933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.791 [2024-05-12 07:06:58.903117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.791 [2024-05-12 07:06:58.903142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.791 [2024-05-12 07:06:58.903156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.791 [2024-05-12 07:06:58.903169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:51.791 [2024-05-12 07:06:58.903197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:51.791 qpair failed and we were unable to recover it. 
00:26:51.791 [2024-05-12 07:06:58.912944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.791 [2024-05-12 07:06:58.913103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.791 [2024-05-12 07:06:58.913127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.791 [2024-05-12 07:06:58.913141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.791 [2024-05-12 07:06:58.913155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:51.791 [2024-05-12 07:06:58.913182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:51.791 qpair failed and we were unable to recover it.
00:26:52.053 [2024-05-12 07:06:58.922900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.053 [2024-05-12 07:06:58.923056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.053 [2024-05-12 07:06:58.923089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.053 [2024-05-12 07:06:58.923108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.053 [2024-05-12 07:06:58.923121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.053 [2024-05-12 07:06:58.923149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.053 qpair failed and we were unable to recover it.
00:26:52.053 [2024-05-12 07:06:58.932924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.053 [2024-05-12 07:06:58.933074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.053 [2024-05-12 07:06:58.933099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.053 [2024-05-12 07:06:58.933113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.053 [2024-05-12 07:06:58.933128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.053 [2024-05-12 07:06:58.933156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.053 qpair failed and we were unable to recover it.
00:26:52.053 [2024-05-12 07:06:58.942945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.053 [2024-05-12 07:06:58.943111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.053 [2024-05-12 07:06:58.943136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.053 [2024-05-12 07:06:58.943150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.053 [2024-05-12 07:06:58.943163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.053 [2024-05-12 07:06:58.943191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.053 qpair failed and we were unable to recover it.
00:26:52.053 [2024-05-12 07:06:58.952979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.053 [2024-05-12 07:06:58.953132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.053 [2024-05-12 07:06:58.953158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.053 [2024-05-12 07:06:58.953172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.053 [2024-05-12 07:06:58.953185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.053 [2024-05-12 07:06:58.953213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.053 qpair failed and we were unable to recover it.
00:26:52.053 [2024-05-12 07:06:58.963123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:58.963269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:58.963294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:58.963309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:58.963322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:58.963359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:58.973022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:58.973172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:58.973197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:58.973211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:58.973224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:58.973252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:58.983062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:58.983214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:58.983239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:58.983253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:58.983266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:58.983294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:58.993128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:58.993304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:58.993329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:58.993344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:58.993357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:58.993385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.003185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.003336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.003360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.003375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:59.003388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:59.003416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.013156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.013345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.013375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.013390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:59.013403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:59.013431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.023188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.023341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.023366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.023381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:59.023394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:59.023421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.033198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.033347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.033373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.033387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:59.033400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:59.033428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.043269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.043438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.043463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.043477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:59.043490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:59.043518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.053272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.053424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.053450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.053464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:59.053482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:59.053514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.063292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.063445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.063470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.063486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:59.063499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:59.063529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.073321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.073473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.073497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.073512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:59.073526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:59.073554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.083368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.083518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.083543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.083557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.054 [2024-05-12 07:06:59.083571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.054 [2024-05-12 07:06:59.083598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.054 qpair failed and we were unable to recover it.
00:26:52.054 [2024-05-12 07:06:59.093413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.054 [2024-05-12 07:06:59.093573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.054 [2024-05-12 07:06:59.093598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.054 [2024-05-12 07:06:59.093612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.055 [2024-05-12 07:06:59.093626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.055 [2024-05-12 07:06:59.093653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.055 qpair failed and we were unable to recover it.
00:26:52.055 [2024-05-12 07:06:59.103420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.055 [2024-05-12 07:06:59.103575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.055 [2024-05-12 07:06:59.103604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.055 [2024-05-12 07:06:59.103620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.055 [2024-05-12 07:06:59.103633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.055 [2024-05-12 07:06:59.103660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.055 qpair failed and we were unable to recover it.
00:26:52.055 [2024-05-12 07:06:59.113418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.055 [2024-05-12 07:06:59.113570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.055 [2024-05-12 07:06:59.113595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.055 [2024-05-12 07:06:59.113609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.055 [2024-05-12 07:06:59.113623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.055 [2024-05-12 07:06:59.113650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.055 qpair failed and we were unable to recover it.
00:26:52.055 [2024-05-12 07:06:59.123464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.055 [2024-05-12 07:06:59.123611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.055 [2024-05-12 07:06:59.123635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.055 [2024-05-12 07:06:59.123650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.055 [2024-05-12 07:06:59.123663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.055 [2024-05-12 07:06:59.123691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.055 qpair failed and we were unable to recover it.
00:26:52.055 [2024-05-12 07:06:59.133502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.055 [2024-05-12 07:06:59.133654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.055 [2024-05-12 07:06:59.133679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.055 [2024-05-12 07:06:59.133693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.055 [2024-05-12 07:06:59.133713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.055 [2024-05-12 07:06:59.133744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.055 qpair failed and we were unable to recover it.
00:26:52.055 [2024-05-12 07:06:59.143526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.055 [2024-05-12 07:06:59.143680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.055 [2024-05-12 07:06:59.143713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.055 [2024-05-12 07:06:59.143728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.055 [2024-05-12 07:06:59.143747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.055 [2024-05-12 07:06:59.143777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.055 qpair failed and we were unable to recover it.
00:26:52.055 [2024-05-12 07:06:59.153603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.055 [2024-05-12 07:06:59.153764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.055 [2024-05-12 07:06:59.153790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.055 [2024-05-12 07:06:59.153804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.055 [2024-05-12 07:06:59.153817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.055 [2024-05-12 07:06:59.153845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.055 qpair failed and we were unable to recover it.
00:26:52.055 [2024-05-12 07:06:59.163576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.055 [2024-05-12 07:06:59.163732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.055 [2024-05-12 07:06:59.163757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.055 [2024-05-12 07:06:59.163772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.055 [2024-05-12 07:06:59.163785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.055 [2024-05-12 07:06:59.163812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.055 qpair failed and we were unable to recover it.
00:26:52.055 [2024-05-12 07:06:59.173616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.055 [2024-05-12 07:06:59.173791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.055 [2024-05-12 07:06:59.173815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.055 [2024-05-12 07:06:59.173829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.055 [2024-05-12 07:06:59.173843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.055 [2024-05-12 07:06:59.173870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.055 qpair failed and we were unable to recover it.
00:26:52.319 [2024-05-12 07:06:59.183691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.319 [2024-05-12 07:06:59.183889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.319 [2024-05-12 07:06:59.183915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.319 [2024-05-12 07:06:59.183929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.319 [2024-05-12 07:06:59.183943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.319 [2024-05-12 07:06:59.183970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.319 qpair failed and we were unable to recover it.
00:26:52.319 [2024-05-12 07:06:59.193708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.319 [2024-05-12 07:06:59.193866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.319 [2024-05-12 07:06:59.193891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.319 [2024-05-12 07:06:59.193905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.319 [2024-05-12 07:06:59.193919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.319 [2024-05-12 07:06:59.193946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.319 qpair failed and we were unable to recover it.
00:26:52.319 [2024-05-12 07:06:59.203766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.319 [2024-05-12 07:06:59.203920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.319 [2024-05-12 07:06:59.203946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.319 [2024-05-12 07:06:59.203960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.319 [2024-05-12 07:06:59.203974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.319 [2024-05-12 07:06:59.204001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.319 qpair failed and we were unable to recover it.
00:26:52.319 [2024-05-12 07:06:59.213827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.319 [2024-05-12 07:06:59.213980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.319 [2024-05-12 07:06:59.214004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.319 [2024-05-12 07:06:59.214018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.319 [2024-05-12 07:06:59.214031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.319 [2024-05-12 07:06:59.214059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.319 qpair failed and we were unable to recover it.
00:26:52.319 [2024-05-12 07:06:59.223775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.319 [2024-05-12 07:06:59.223967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.319 [2024-05-12 07:06:59.223992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.319 [2024-05-12 07:06:59.224007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.319 [2024-05-12 07:06:59.224020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.319 [2024-05-12 07:06:59.224048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.319 qpair failed and we were unable to recover it.
00:26:52.319 [2024-05-12 07:06:59.233793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.319 [2024-05-12 07:06:59.233943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.319 [2024-05-12 07:06:59.233967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.319 [2024-05-12 07:06:59.233981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.319 [2024-05-12 07:06:59.234000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.319 [2024-05-12 07:06:59.234029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.319 qpair failed and we were unable to recover it.
00:26:52.319 [2024-05-12 07:06:59.243910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.319 [2024-05-12 07:06:59.244063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.319 [2024-05-12 07:06:59.244088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.319 [2024-05-12 07:06:59.244102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.319 [2024-05-12 07:06:59.244115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.319 [2024-05-12 07:06:59.244142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.319 qpair failed and we were unable to recover it.
00:26:52.319 [2024-05-12 07:06:59.253884] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.319 [2024-05-12 07:06:59.254086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.319 [2024-05-12 07:06:59.254111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.319 [2024-05-12 07:06:59.254125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.319 [2024-05-12 07:06:59.254139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.319 [2024-05-12 07:06:59.254166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.319 qpair failed and we were unable to recover it.
00:26:52.319 [2024-05-12 07:06:59.263882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:52.319 [2024-05-12 07:06:59.264038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:52.319 [2024-05-12 07:06:59.264063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:52.319 [2024-05-12 07:06:59.264077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:52.319 [2024-05-12 07:06:59.264090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0
00:26:52.320 [2024-05-12 07:06:59.264117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.320 qpair failed and we were unable to recover it.
00:26:52.320 [2024-05-12 07:06:59.273961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.274146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.274174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.274192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.274206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.274235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.283933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.284106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.284132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.284146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.284160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.284188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.294034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.294189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.294213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.294227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.294240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.294267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.304051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.304225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.304250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.304263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.304276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.304305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.314029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.314253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.314277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.314291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.314304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.314332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.324102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.324259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.324283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.324298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.324316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.324345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.334133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.334302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.334327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.334341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.334354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.334382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.344205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.344359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.344383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.344397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.344410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.344437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.354127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.354279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.354304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.354319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.354333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.354361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.364152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.364294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.364318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.364333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.364346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.364374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.374184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.374335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.374360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.374375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.374388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.374415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.384219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.384369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.384393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.384408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.384421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.384448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.394283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.394440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.394464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.394479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.320 [2024-05-12 07:06:59.394492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.320 [2024-05-12 07:06:59.394520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.320 qpair failed and we were unable to recover it. 
00:26:52.320 [2024-05-12 07:06:59.404291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.320 [2024-05-12 07:06:59.404445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.320 [2024-05-12 07:06:59.404470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.320 [2024-05-12 07:06:59.404485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.321 [2024-05-12 07:06:59.404498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.321 [2024-05-12 07:06:59.404526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.321 qpair failed and we were unable to recover it. 
00:26:52.321 [2024-05-12 07:06:59.414400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.321 [2024-05-12 07:06:59.414547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.321 [2024-05-12 07:06:59.414572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.321 [2024-05-12 07:06:59.414593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.321 [2024-05-12 07:06:59.414607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.321 [2024-05-12 07:06:59.414635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.321 qpair failed and we were unable to recover it. 
00:26:52.321 [2024-05-12 07:06:59.424352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.321 [2024-05-12 07:06:59.424506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.321 [2024-05-12 07:06:59.424530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.321 [2024-05-12 07:06:59.424544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.321 [2024-05-12 07:06:59.424558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.321 [2024-05-12 07:06:59.424585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.321 qpair failed and we were unable to recover it. 
00:26:52.321 [2024-05-12 07:06:59.434375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.321 [2024-05-12 07:06:59.434521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.321 [2024-05-12 07:06:59.434546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.321 [2024-05-12 07:06:59.434560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.321 [2024-05-12 07:06:59.434573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.321 [2024-05-12 07:06:59.434602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.321 qpair failed and we were unable to recover it. 
00:26:52.321 [2024-05-12 07:06:59.444411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.321 [2024-05-12 07:06:59.444590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.321 [2024-05-12 07:06:59.444615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.321 [2024-05-12 07:06:59.444630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.321 [2024-05-12 07:06:59.444642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.321 [2024-05-12 07:06:59.444669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.321 qpair failed and we were unable to recover it. 
00:26:52.581 [2024-05-12 07:06:59.454431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.581 [2024-05-12 07:06:59.454580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.581 [2024-05-12 07:06:59.454605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.581 [2024-05-12 07:06:59.454620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.581 [2024-05-12 07:06:59.454633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.581 [2024-05-12 07:06:59.454660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.581 qpair failed and we were unable to recover it. 
00:26:52.581 [2024-05-12 07:06:59.464516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.581 [2024-05-12 07:06:59.464710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.581 [2024-05-12 07:06:59.464736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.581 [2024-05-12 07:06:59.464750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.581 [2024-05-12 07:06:59.464763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.581 [2024-05-12 07:06:59.464791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.581 qpair failed and we were unable to recover it. 
00:26:52.581 [2024-05-12 07:06:59.474484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.581 [2024-05-12 07:06:59.474637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.581 [2024-05-12 07:06:59.474662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.581 [2024-05-12 07:06:59.474676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.581 [2024-05-12 07:06:59.474689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.581 [2024-05-12 07:06:59.474724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.581 qpair failed and we were unable to recover it. 
00:26:52.581 [2024-05-12 07:06:59.484517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.581 [2024-05-12 07:06:59.484682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.581 [2024-05-12 07:06:59.484713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.581 [2024-05-12 07:06:59.484729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.581 [2024-05-12 07:06:59.484742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.581 [2024-05-12 07:06:59.484770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.581 qpair failed and we were unable to recover it. 
00:26:52.581 [2024-05-12 07:06:59.494566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.581 [2024-05-12 07:06:59.494717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.581 [2024-05-12 07:06:59.494742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.581 [2024-05-12 07:06:59.494756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.581 [2024-05-12 07:06:59.494769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.581 [2024-05-12 07:06:59.494797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.581 qpair failed and we were unable to recover it. 
00:26:52.581 [2024-05-12 07:06:59.504605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.581 [2024-05-12 07:06:59.504799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.581 [2024-05-12 07:06:59.504824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.504845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.504859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.504887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.514609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.514770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.514796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.514810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.514823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.514851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.524649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.524832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.524856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.524871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.524884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.524912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.534683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.534867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.534891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.534905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.534918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.534946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.544687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.544848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.544873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.544888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.544901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.544929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.554725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.554878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.554903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.554917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.554930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.554958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.564805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.564963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.564989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.565004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.565017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.565045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.574781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.574928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.574953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.574967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.574980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.575008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.584802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.584972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.584996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.585010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.585023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.585051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.594858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.595016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.595041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.595063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.595078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.595106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.604884] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.605036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.605061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.605075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.605088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.605116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.614902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.615059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.615083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.615098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.615111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.615139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.624939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.625155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.625180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.625194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.625206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.625233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.634955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.635104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.582 [2024-05-12 07:06:59.635128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.582 [2024-05-12 07:06:59.635142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.582 [2024-05-12 07:06:59.635155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.582 [2024-05-12 07:06:59.635182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.582 qpair failed and we were unable to recover it. 
00:26:52.582 [2024-05-12 07:06:59.644977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.582 [2024-05-12 07:06:59.645124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.583 [2024-05-12 07:06:59.645149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.583 [2024-05-12 07:06:59.645163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.583 [2024-05-12 07:06:59.645177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.583 [2024-05-12 07:06:59.645204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.583 qpair failed and we were unable to recover it. 
00:26:52.583 [2024-05-12 07:06:59.655015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.583 [2024-05-12 07:06:59.655183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.583 [2024-05-12 07:06:59.655207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.583 [2024-05-12 07:06:59.655221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.583 [2024-05-12 07:06:59.655234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.583 [2024-05-12 07:06:59.655264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.583 qpair failed and we were unable to recover it. 
00:26:52.583 [2024-05-12 07:06:59.665059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.583 [2024-05-12 07:06:59.665221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.583 [2024-05-12 07:06:59.665247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.583 [2024-05-12 07:06:59.665261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.583 [2024-05-12 07:06:59.665274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.583 [2024-05-12 07:06:59.665301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.583 qpair failed and we were unable to recover it. 
00:26:52.583 [2024-05-12 07:06:59.675089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.583 [2024-05-12 07:06:59.675274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.583 [2024-05-12 07:06:59.675298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.583 [2024-05-12 07:06:59.675311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.583 [2024-05-12 07:06:59.675329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.583 [2024-05-12 07:06:59.675356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.583 qpair failed and we were unable to recover it. 
00:26:52.583 [2024-05-12 07:06:59.685156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.583 [2024-05-12 07:06:59.685313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.583 [2024-05-12 07:06:59.685338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.583 [2024-05-12 07:06:59.685358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.583 [2024-05-12 07:06:59.685372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.583 [2024-05-12 07:06:59.685401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.583 qpair failed and we were unable to recover it. 
00:26:52.583 [2024-05-12 07:06:59.695145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.583 [2024-05-12 07:06:59.695293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.583 [2024-05-12 07:06:59.695318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.583 [2024-05-12 07:06:59.695333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.583 [2024-05-12 07:06:59.695346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.583 [2024-05-12 07:06:59.695373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.583 qpair failed and we were unable to recover it. 
00:26:52.583 [2024-05-12 07:06:59.705192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.583 [2024-05-12 07:06:59.705350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.583 [2024-05-12 07:06:59.705375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.583 [2024-05-12 07:06:59.705389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.583 [2024-05-12 07:06:59.705402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.583 [2024-05-12 07:06:59.705430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.583 qpair failed and we were unable to recover it. 
00:26:52.842 [2024-05-12 07:06:59.715193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.842 [2024-05-12 07:06:59.715346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.842 [2024-05-12 07:06:59.715371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.842 [2024-05-12 07:06:59.715385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.842 [2024-05-12 07:06:59.715398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.842 [2024-05-12 07:06:59.715426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.842 qpair failed and we were unable to recover it. 
00:26:52.842 [2024-05-12 07:06:59.725231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.842 [2024-05-12 07:06:59.725384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.842 [2024-05-12 07:06:59.725410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.842 [2024-05-12 07:06:59.725424] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.842 [2024-05-12 07:06:59.725437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.842 [2024-05-12 07:06:59.725465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.842 qpair failed and we were unable to recover it. 
00:26:52.842 [2024-05-12 07:06:59.735269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.842 [2024-05-12 07:06:59.735418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.842 [2024-05-12 07:06:59.735443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.842 [2024-05-12 07:06:59.735457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.842 [2024-05-12 07:06:59.735470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.842 [2024-05-12 07:06:59.735497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.842 qpair failed and we were unable to recover it. 
00:26:52.842 [2024-05-12 07:06:59.745269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.842 [2024-05-12 07:06:59.745425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.842 [2024-05-12 07:06:59.745450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.842 [2024-05-12 07:06:59.745464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.842 [2024-05-12 07:06:59.745477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.842 [2024-05-12 07:06:59.745504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.842 qpair failed and we were unable to recover it. 
00:26:52.842 [2024-05-12 07:06:59.755366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.842 [2024-05-12 07:06:59.755519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.842 [2024-05-12 07:06:59.755545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.842 [2024-05-12 07:06:59.755559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.842 [2024-05-12 07:06:59.755572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.842 [2024-05-12 07:06:59.755600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.842 qpair failed and we were unable to recover it. 
00:26:52.842 [2024-05-12 07:06:59.765377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.842 [2024-05-12 07:06:59.765551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.842 [2024-05-12 07:06:59.765577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.842 [2024-05-12 07:06:59.765592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.842 [2024-05-12 07:06:59.765605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.842 [2024-05-12 07:06:59.765633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.842 qpair failed and we were unable to recover it. 
00:26:52.842 [2024-05-12 07:06:59.775349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.842 [2024-05-12 07:06:59.775501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.842 [2024-05-12 07:06:59.775534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.842 [2024-05-12 07:06:59.775550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.842 [2024-05-12 07:06:59.775563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.842 [2024-05-12 07:06:59.775591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.842 qpair failed and we were unable to recover it. 
00:26:52.842 [2024-05-12 07:06:59.785410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.842 [2024-05-12 07:06:59.785566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.842 [2024-05-12 07:06:59.785591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.842 [2024-05-12 07:06:59.785605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.842 [2024-05-12 07:06:59.785618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.842 [2024-05-12 07:06:59.785647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.842 qpair failed and we were unable to recover it. 
00:26:52.842 [2024-05-12 07:06:59.795444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.842 [2024-05-12 07:06:59.795602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.842 [2024-05-12 07:06:59.795627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.843 [2024-05-12 07:06:59.795642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.843 [2024-05-12 07:06:59.795655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.843 [2024-05-12 07:06:59.795683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.843 qpair failed and we were unable to recover it. 
00:26:52.843 [2024-05-12 07:06:59.805447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.843 [2024-05-12 07:06:59.805602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.843 [2024-05-12 07:06:59.805627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.843 [2024-05-12 07:06:59.805641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.843 [2024-05-12 07:06:59.805655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16599f0 00:26:52.843 [2024-05-12 07:06:59.805683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.843 qpair failed and we were unable to recover it. 
00:26:52.843 [2024-05-12 07:06:59.815508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.843 [2024-05-12 07:06:59.815662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.843 [2024-05-12 07:06:59.815704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.843 [2024-05-12 07:06:59.815725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.843 [2024-05-12 07:06:59.815740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fac9c000b90 00:26:52.843 [2024-05-12 07:06:59.815772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.843 qpair failed and we were unable to recover it. 
00:26:52.843 [2024-05-12 07:06:59.825560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.843 [2024-05-12 07:06:59.825725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.843 [2024-05-12 07:06:59.825753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.843 [2024-05-12 07:06:59.825768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.843 [2024-05-12 07:06:59.825783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fac9c000b90 00:26:52.843 [2024-05-12 07:06:59.825813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.843 qpair failed and we were unable to recover it. 
00:26:52.843 [2024-05-12 07:06:59.835585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.843 [2024-05-12 07:06:59.835764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.843 [2024-05-12 07:06:59.835796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.843 [2024-05-12 07:06:59.835812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.843 [2024-05-12 07:06:59.835826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7facac000b90 00:26:52.843 [2024-05-12 07:06:59.835859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.843 qpair failed and we were unable to recover it. 
00:26:52.843 [2024-05-12 07:06:59.845606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.843 [2024-05-12 07:06:59.845762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.843 [2024-05-12 07:06:59.845789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.843 [2024-05-12 07:06:59.845805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.843 [2024-05-12 07:06:59.845819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7facac000b90 00:26:52.843 [2024-05-12 07:06:59.845849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.843 qpair failed and we were unable to recover it. 00:26:52.843 [2024-05-12 07:06:59.845972] nvme_ctrlr.c:4325:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:52.843 A controller has encountered a failure and is being reset. 
00:26:52.843 [2024-05-12 07:06:59.855627] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.843 [2024-05-12 07:06:59.855784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.843 [2024-05-12 07:06:59.855815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.843 [2024-05-12 07:06:59.855832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.843 [2024-05-12 07:06:59.855846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faca4000b90 00:26:52.843 [2024-05-12 07:06:59.855878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.843 qpair failed and we were unable to recover it. 
00:26:52.843 [2024-05-12 07:06:59.865684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.843 [2024-05-12 07:06:59.865882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.843 [2024-05-12 07:06:59.865910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.843 [2024-05-12 07:06:59.865925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.843 [2024-05-12 07:06:59.865938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faca4000b90 00:26:52.843 [2024-05-12 07:06:59.865970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.843 qpair failed and we were unable to recover it. 00:26:52.843 [2024-05-12 07:06:59.866069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16674b0 (9): Bad file descriptor 00:26:53.101 Controller properly reset. 00:26:53.101 Initializing NVMe Controllers 00:26:53.101 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:53.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:53.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:53.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:53.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:53.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:53.101 Initialization complete. Launching workers. 
00:26:53.101 Starting thread on core 1 00:26:53.101 Starting thread on core 2 00:26:53.101 Starting thread on core 3 00:26:53.101 Starting thread on core 0 00:26:53.101 07:07:00 -- host/target_disconnect.sh@59 -- # sync 00:26:53.102 00:26:53.102 real 0m11.509s 00:26:53.102 user 0m19.697s 00:26:53.102 sys 0m5.744s 00:26:53.102 07:07:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.102 07:07:00 -- common/autotest_common.sh@10 -- # set +x 00:26:53.102 ************************************ 00:26:53.102 END TEST nvmf_target_disconnect_tc2 00:26:53.102 ************************************ 00:26:53.102 07:07:00 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:26:53.102 07:07:00 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:53.102 07:07:00 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:26:53.102 07:07:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:53.102 07:07:00 -- nvmf/common.sh@116 -- # sync 00:26:53.102 07:07:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:53.102 07:07:00 -- nvmf/common.sh@119 -- # set +e 00:26:53.102 07:07:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:53.102 07:07:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:53.102 rmmod nvme_tcp 00:26:53.102 rmmod nvme_fabrics 00:26:53.102 rmmod nvme_keyring 00:26:53.102 07:07:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:53.102 07:07:00 -- nvmf/common.sh@123 -- # set -e 00:26:53.102 07:07:00 -- nvmf/common.sh@124 -- # return 0 00:26:53.102 07:07:00 -- nvmf/common.sh@477 -- # '[' -n 3147796 ']' 00:26:53.102 07:07:00 -- nvmf/common.sh@478 -- # killprocess 3147796 00:26:53.102 07:07:00 -- common/autotest_common.sh@926 -- # '[' -z 3147796 ']' 00:26:53.102 07:07:00 -- common/autotest_common.sh@930 -- # kill -0 3147796 00:26:53.102 07:07:00 -- common/autotest_common.sh@931 -- # uname 00:26:53.102 07:07:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:53.102 07:07:00 -- common/autotest_common.sh@932 -- # 
ps --no-headers -o comm= 3147796 00:26:53.102 07:07:00 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:26:53.102 07:07:00 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:26:53.102 07:07:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3147796' 00:26:53.102 killing process with pid 3147796 00:26:53.102 07:07:00 -- common/autotest_common.sh@945 -- # kill 3147796 00:26:53.102 07:07:00 -- common/autotest_common.sh@950 -- # wait 3147796 00:26:53.360 07:07:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:53.360 07:07:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:53.360 07:07:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:53.360 07:07:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:53.360 07:07:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:53.360 07:07:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.360 07:07:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.360 07:07:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.890 07:07:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:55.890 00:26:55.890 real 0m16.219s 00:26:55.890 user 0m46.011s 00:26:55.890 sys 0m7.667s 00:26:55.890 07:07:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.890 07:07:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.890 ************************************ 00:26:55.890 END TEST nvmf_target_disconnect 00:26:55.890 ************************************ 00:26:55.890 07:07:02 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:26:55.891 07:07:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:55.891 07:07:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.891 07:07:02 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:26:55.891 00:26:55.891 real 20m43.608s 00:26:55.891 user 58m35.562s 00:26:55.891 sys 4m56.774s 00:26:55.891 07:07:02 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.891 07:07:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.891 ************************************ 00:26:55.891 END TEST nvmf_tcp 00:26:55.891 ************************************ 00:26:55.891 07:07:02 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:26:55.891 07:07:02 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:55.891 07:07:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:55.891 07:07:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.891 07:07:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.891 ************************************ 00:26:55.891 START TEST spdkcli_nvmf_tcp 00:26:55.891 ************************************ 00:26:55.891 07:07:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:55.891 * Looking for test storage... 
00:26:55.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:55.891 07:07:02 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:55.891 07:07:02 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:55.891 07:07:02 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:55.891 07:07:02 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.891 07:07:02 -- nvmf/common.sh@7 -- # uname -s 00:26:55.891 07:07:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.891 07:07:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.891 07:07:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.891 07:07:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.891 07:07:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.891 07:07:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.891 07:07:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.891 07:07:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.891 07:07:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.891 07:07:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.891 07:07:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.891 07:07:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.891 07:07:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.891 07:07:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.891 07:07:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.891 07:07:02 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.891 07:07:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.891 07:07:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.891 07:07:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.891 07:07:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.891 07:07:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.891 07:07:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.891 07:07:02 -- paths/export.sh@5 -- # export PATH 00:26:55.891 07:07:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.891 07:07:02 -- nvmf/common.sh@46 -- # : 0 00:26:55.891 07:07:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:55.891 07:07:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:55.891 07:07:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:55.891 07:07:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.891 07:07:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.891 07:07:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:55.891 07:07:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:55.891 07:07:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:55.891 07:07:02 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:55.891 07:07:02 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:55.891 07:07:02 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:55.891 07:07:02 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:55.891 07:07:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:55.891 07:07:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.891 07:07:02 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:55.891 07:07:02 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3149021 00:26:55.891 07:07:02 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:55.891 07:07:02 -- spdkcli/common.sh@34 -- # waitforlisten 3149021 00:26:55.891 07:07:02 -- common/autotest_common.sh@819 -- # '[' -z 3149021 ']' 00:26:55.891 07:07:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.891 07:07:02 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:26:55.891 07:07:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.891 07:07:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:55.891 07:07:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.891 [2024-05-12 07:07:02.596193] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:55.891 [2024-05-12 07:07:02.596288] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3149021 ] 00:26:55.891 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.891 [2024-05-12 07:07:02.657394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:55.891 [2024-05-12 07:07:02.772368] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:55.891 [2024-05-12 07:07:02.772599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.891 [2024-05-12 07:07:02.772605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.457 07:07:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:56.457 07:07:03 -- common/autotest_common.sh@852 -- # return 0 00:26:56.457 07:07:03 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:56.457 07:07:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:56.457 07:07:03 -- common/autotest_common.sh@10 -- # set +x 00:26:56.457 07:07:03 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:56.457 07:07:03 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:56.457 07:07:03 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:56.457 07:07:03 -- 
common/autotest_common.sh@712 -- # xtrace_disable 00:26:56.457 07:07:03 -- common/autotest_common.sh@10 -- # set +x 00:26:56.457 07:07:03 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:56.457 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:56.457 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:56.457 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:56.457 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:56.457 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:56.457 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:56.457 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:56.457 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:56.457 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create 
Malloc1'\'' '\''Malloc1'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:56.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:56.457 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:56.457 ' 00:26:57.024 [2024-05-12 07:07:03.952236] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:59.555 [2024-05-12 07:07:06.100793] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.493 [2024-05-12 07:07:07.325209] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
127.0.0.1 port 4260 *** 00:27:03.029 [2024-05-12 07:07:09.616404] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:04.930 [2024-05-12 07:07:11.590983] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:06.305 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:06.305 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:06.305 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:06.305 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:06.305 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:06.305 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:06.305 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:06.305 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:06.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:06.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:06.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:06.305 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:06.306 Executing 
command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:06.306 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:06.306 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:06.306 07:07:13 -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:27:06.306 07:07:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:06.306 07:07:13 -- common/autotest_common.sh@10 -- # set +x 00:27:06.306 07:07:13 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:06.306 07:07:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:06.306 07:07:13 -- common/autotest_common.sh@10 -- # set +x 00:27:06.306 07:07:13 -- spdkcli/nvmf.sh@69 -- # check_match 00:27:06.306 07:07:13 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:06.563 07:07:13 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:06.563 07:07:13 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:06.563 07:07:13 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:06.563 07:07:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:06.563 07:07:13 -- common/autotest_common.sh@10 -- # set +x 00:27:06.563 07:07:13 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:06.563 07:07:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:06.563 07:07:13 -- common/autotest_common.sh@10 -- # set +x 00:27:06.563 07:07:13 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:06.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:06.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:06.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 
00:27:06.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:06.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:06.563 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:06.563 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:06.563 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:06.563 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:06.563 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:06.563 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:06.563 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:06.563 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:06.563 ' 00:27:11.831 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:11.831 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:11.831 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:11.831 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:11.831 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:11.831 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:11.831 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:11.831 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:11.831 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:11.831 Executing 
command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:11.831 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:11.831 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:11.831 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:11.831 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:11.831 07:07:18 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:11.832 07:07:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:11.832 07:07:18 -- common/autotest_common.sh@10 -- # set +x 00:27:11.832 07:07:18 -- spdkcli/nvmf.sh@90 -- # killprocess 3149021 00:27:11.832 07:07:18 -- common/autotest_common.sh@926 -- # '[' -z 3149021 ']' 00:27:11.832 07:07:18 -- common/autotest_common.sh@930 -- # kill -0 3149021 00:27:11.832 07:07:18 -- common/autotest_common.sh@931 -- # uname 00:27:11.832 07:07:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:11.832 07:07:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3149021 00:27:12.090 07:07:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:12.090 07:07:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:12.090 07:07:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3149021' 00:27:12.090 killing process with pid 3149021 00:27:12.090 07:07:18 -- common/autotest_common.sh@945 -- # kill 3149021 00:27:12.090 [2024-05-12 07:07:18.975930] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:12.090 07:07:18 -- common/autotest_common.sh@950 -- # wait 3149021 00:27:12.349 07:07:19 -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:12.349 07:07:19 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:12.349 07:07:19 -- spdkcli/common.sh@13 -- # '[' -n 3149021 ']' 00:27:12.349 07:07:19 -- 
spdkcli/common.sh@14 -- # killprocess 3149021 00:27:12.349 07:07:19 -- common/autotest_common.sh@926 -- # '[' -z 3149021 ']' 00:27:12.349 07:07:19 -- common/autotest_common.sh@930 -- # kill -0 3149021 00:27:12.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3149021) - No such process 00:27:12.349 07:07:19 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3149021 is not found' 00:27:12.349 Process with pid 3149021 is not found 00:27:12.349 07:07:19 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:12.349 07:07:19 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:12.349 07:07:19 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:12.349 00:27:12.349 real 0m16.741s 00:27:12.349 user 0m35.451s 00:27:12.349 sys 0m0.841s 00:27:12.349 07:07:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.349 07:07:19 -- common/autotest_common.sh@10 -- # set +x 00:27:12.349 ************************************ 00:27:12.349 END TEST spdkcli_nvmf_tcp 00:27:12.349 ************************************ 00:27:12.350 07:07:19 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:12.350 07:07:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:12.350 07:07:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:12.350 07:07:19 -- common/autotest_common.sh@10 -- # set +x 00:27:12.350 ************************************ 00:27:12.350 START TEST nvmf_identify_passthru 00:27:12.350 ************************************ 00:27:12.350 07:07:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:12.350 * Looking 
for test storage... 00:27:12.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:12.350 07:07:19 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.350 07:07:19 -- nvmf/common.sh@7 -- # uname -s 00:27:12.350 07:07:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.350 07:07:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.350 07:07:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.350 07:07:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.350 07:07:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.350 07:07:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.350 07:07:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.350 07:07:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.350 07:07:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.350 07:07:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.350 07:07:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:12.350 07:07:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:12.350 07:07:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.350 07:07:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.350 07:07:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.350 07:07:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.350 07:07:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.350 07:07:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.350 07:07:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.350 07:07:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.350 07:07:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.350 07:07:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.350 07:07:19 -- paths/export.sh@5 -- # export PATH 00:27:12.350 07:07:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.350 07:07:19 -- nvmf/common.sh@46 -- # : 0 00:27:12.350 07:07:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:12.350 07:07:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:12.350 
07:07:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:12.350 07:07:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.350 07:07:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.350 07:07:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:12.350 07:07:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:12.350 07:07:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:12.350 07:07:19 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.350 07:07:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.350 07:07:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.350 07:07:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.350 07:07:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.350 07:07:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.350 07:07:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.350 07:07:19 -- paths/export.sh@5 -- # export PATH 00:27:12.350 07:07:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.350 07:07:19 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:12.350 07:07:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:12.350 07:07:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.350 07:07:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:12.350 07:07:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:12.350 07:07:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:12.350 07:07:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.350 07:07:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:12.350 07:07:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.350 07:07:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:12.350 07:07:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:12.350 07:07:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:12.350 07:07:19 -- 
common/autotest_common.sh@10 -- # set +x 00:27:14.299 07:07:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:14.299 07:07:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:14.299 07:07:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:14.299 07:07:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:14.299 07:07:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:14.299 07:07:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:14.299 07:07:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:14.299 07:07:21 -- nvmf/common.sh@294 -- # net_devs=() 00:27:14.299 07:07:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:14.299 07:07:21 -- nvmf/common.sh@295 -- # e810=() 00:27:14.299 07:07:21 -- nvmf/common.sh@295 -- # local -ga e810 00:27:14.299 07:07:21 -- nvmf/common.sh@296 -- # x722=() 00:27:14.299 07:07:21 -- nvmf/common.sh@296 -- # local -ga x722 00:27:14.299 07:07:21 -- nvmf/common.sh@297 -- # mlx=() 00:27:14.299 07:07:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:14.299 07:07:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.299 07:07:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:14.299 07:07:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:14.299 07:07:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:14.299 07:07:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:14.299 07:07:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:14.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:14.299 07:07:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:14.299 07:07:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:14.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:14.299 07:07:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:14.299 07:07:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:14.299 07:07:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:14.300 07:07:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:14.300 07:07:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:14.300 07:07:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.300 07:07:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:14.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:14.300 07:07:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.300 07:07:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:14.300 07:07:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.300 07:07:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:14.300 07:07:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.300 07:07:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:14.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:14.300 07:07:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.300 07:07:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:14.300 07:07:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:14.300 07:07:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:14.300 07:07:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:14.300 07:07:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:14.300 07:07:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.300 07:07:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.300 07:07:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.300 07:07:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:14.300 07:07:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.300 07:07:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.300 07:07:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:14.300 07:07:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.300 07:07:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:14.300 07:07:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:14.300 07:07:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:14.300 07:07:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.300 07:07:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.561 07:07:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.561 07:07:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.561 07:07:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:14.561 07:07:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.561 07:07:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.561 07:07:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.561 07:07:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:14.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:27:14.561 00:27:14.561 --- 10.0.0.2 ping statistics --- 00:27:14.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.561 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:14.561 07:07:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:27:14.561 00:27:14.561 --- 10.0.0.1 ping statistics --- 00:27:14.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.561 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:27:14.561 07:07:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.561 07:07:21 -- nvmf/common.sh@410 -- # return 0 00:27:14.561 07:07:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:14.561 07:07:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.561 07:07:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:14.561 07:07:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:14.561 07:07:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.561 07:07:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:14.561 07:07:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:14.561 07:07:21 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:14.561 07:07:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:14.561 07:07:21 -- common/autotest_common.sh@10 -- # set +x 00:27:14.561 07:07:21 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:14.561 07:07:21 -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:14.561 07:07:21 -- common/autotest_common.sh@1509 -- # local bdfs 00:27:14.561 07:07:21 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:14.561 07:07:21 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:14.561 07:07:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:14.561 07:07:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:14.561 07:07:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:14.561 07:07:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:14.561 07:07:21 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:14.561 07:07:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:14.561 07:07:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:14.561 07:07:21 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:27:14.561 07:07:21 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:27:14.561 07:07:21 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:27:14.561 07:07:21 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:14.561 07:07:21 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:14.561 07:07:21 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:14.561 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.759 07:07:25 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:27:18.759 07:07:25 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:18.759 07:07:25 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:18.759 07:07:25 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:18.759 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.954 07:07:29 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:22.954 07:07:29 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:22.954 07:07:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:22.954 07:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:22.954 07:07:30 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:22.954 07:07:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:22.954 07:07:30 -- common/autotest_common.sh@10 -- # set +x 00:27:22.954 07:07:30 -- target/identify_passthru.sh@31 -- # 
nvmfpid=3153751 00:27:22.954 07:07:30 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:22.954 07:07:30 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:22.954 07:07:30 -- target/identify_passthru.sh@35 -- # waitforlisten 3153751 00:27:22.954 07:07:30 -- common/autotest_common.sh@819 -- # '[' -z 3153751 ']' 00:27:22.954 07:07:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.954 07:07:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:22.954 07:07:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.954 07:07:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:22.954 07:07:30 -- common/autotest_common.sh@10 -- # set +x 00:27:22.954 [2024-05-12 07:07:30.067644] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:22.954 [2024-05-12 07:07:30.067754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.212 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.212 [2024-05-12 07:07:30.136783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:23.212 [2024-05-12 07:07:30.243986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:23.212 [2024-05-12 07:07:30.244132] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:23.212 [2024-05-12 07:07:30.244149] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.212 [2024-05-12 07:07:30.244168] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:23.212 [2024-05-12 07:07:30.244316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.212 [2024-05-12 07:07:30.244341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.212 [2024-05-12 07:07:30.244396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.212 [2024-05-12 07:07:30.244398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.212 07:07:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:23.212 07:07:30 -- common/autotest_common.sh@852 -- # return 0 00:27:23.212 07:07:30 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:23.212 07:07:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.212 07:07:30 -- common/autotest_common.sh@10 -- # set +x 00:27:23.212 INFO: Log level set to 20 00:27:23.212 INFO: Requests: 00:27:23.212 { 00:27:23.212 "jsonrpc": "2.0", 00:27:23.212 "method": "nvmf_set_config", 00:27:23.212 "id": 1, 00:27:23.213 "params": { 00:27:23.213 "admin_cmd_passthru": { 00:27:23.213 "identify_ctrlr": true 00:27:23.213 } 00:27:23.213 } 00:27:23.213 } 00:27:23.213 00:27:23.213 INFO: response: 00:27:23.213 { 00:27:23.213 "jsonrpc": "2.0", 00:27:23.213 "id": 1, 00:27:23.213 "result": true 00:27:23.213 } 00:27:23.213 00:27:23.213 07:07:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.213 07:07:30 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:23.213 07:07:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.213 07:07:30 -- common/autotest_common.sh@10 -- # set +x 00:27:23.213 INFO: Setting log level to 20 00:27:23.213 INFO: Setting log level to 20 
00:27:23.213 INFO: Log level set to 20 00:27:23.213 INFO: Log level set to 20 00:27:23.213 INFO: Requests: 00:27:23.213 { 00:27:23.213 "jsonrpc": "2.0", 00:27:23.213 "method": "framework_start_init", 00:27:23.213 "id": 1 00:27:23.213 } 00:27:23.213 00:27:23.213 INFO: Requests: 00:27:23.213 { 00:27:23.213 "jsonrpc": "2.0", 00:27:23.213 "method": "framework_start_init", 00:27:23.213 "id": 1 00:27:23.213 } 00:27:23.213 00:27:23.471 [2024-05-12 07:07:30.424962] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:23.471 INFO: response: 00:27:23.471 { 00:27:23.471 "jsonrpc": "2.0", 00:27:23.471 "id": 1, 00:27:23.471 "result": true 00:27:23.471 } 00:27:23.471 00:27:23.471 INFO: response: 00:27:23.471 { 00:27:23.471 "jsonrpc": "2.0", 00:27:23.471 "id": 1, 00:27:23.471 "result": true 00:27:23.471 } 00:27:23.471 00:27:23.471 07:07:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.471 07:07:30 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:23.471 07:07:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.471 07:07:30 -- common/autotest_common.sh@10 -- # set +x 00:27:23.471 INFO: Setting log level to 40 00:27:23.471 INFO: Setting log level to 40 00:27:23.471 INFO: Setting log level to 40 00:27:23.471 [2024-05-12 07:07:30.434976] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.471 07:07:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.471 07:07:30 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:23.471 07:07:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:23.471 07:07:30 -- common/autotest_common.sh@10 -- # set +x 00:27:23.471 07:07:30 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:27:23.471 07:07:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.471 07:07:30 -- common/autotest_common.sh@10 -- # set +x 
00:27:26.752 Nvme0n1 00:27:26.752 07:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.752 07:07:33 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:26.752 07:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.752 07:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:26.752 07:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.752 07:07:33 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:26.752 07:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.752 07:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:26.752 07:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.752 07:07:33 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.752 07:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.752 07:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:26.752 [2024-05-12 07:07:33.320987] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.752 07:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.752 07:07:33 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:26.752 07:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.752 07:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:26.752 [2024-05-12 07:07:33.328731] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:26.752 [ 00:27:26.752 { 00:27:26.752 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:26.752 "subtype": "Discovery", 00:27:26.752 "listen_addresses": [], 00:27:26.752 "allow_any_host": true, 00:27:26.752 "hosts": [] 00:27:26.752 }, 00:27:26.752 { 
00:27:26.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.752 "subtype": "NVMe", 00:27:26.752 "listen_addresses": [ 00:27:26.752 { 00:27:26.752 "transport": "TCP", 00:27:26.752 "trtype": "TCP", 00:27:26.752 "adrfam": "IPv4", 00:27:26.752 "traddr": "10.0.0.2", 00:27:26.752 "trsvcid": "4420" 00:27:26.752 } 00:27:26.752 ], 00:27:26.752 "allow_any_host": true, 00:27:26.752 "hosts": [], 00:27:26.752 "serial_number": "SPDK00000000000001", 00:27:26.752 "model_number": "SPDK bdev Controller", 00:27:26.752 "max_namespaces": 1, 00:27:26.752 "min_cntlid": 1, 00:27:26.752 "max_cntlid": 65519, 00:27:26.752 "namespaces": [ 00:27:26.752 { 00:27:26.752 "nsid": 1, 00:27:26.752 "bdev_name": "Nvme0n1", 00:27:26.752 "name": "Nvme0n1", 00:27:26.752 "nguid": "8307CB0D038D49B2AA60E9B8A2315048", 00:27:26.752 "uuid": "8307cb0d-038d-49b2-aa60-e9b8a2315048" 00:27:26.752 } 00:27:26.752 ] 00:27:26.752 } 00:27:26.752 ] 00:27:26.752 07:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.752 07:07:33 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:26.752 07:07:33 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:26.752 07:07:33 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:26.752 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.752 07:07:33 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:27:26.752 07:07:33 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:26.752 07:07:33 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:26.752 07:07:33 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:26.752 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.752 
07:07:33 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:26.752 07:07:33 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:27:26.752 07:07:33 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:26.752 07:07:33 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:26.752 07:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.752 07:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:26.752 07:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.752 07:07:33 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:26.752 07:07:33 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:26.752 07:07:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:26.752 07:07:33 -- nvmf/common.sh@116 -- # sync 00:27:26.752 07:07:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:26.752 07:07:33 -- nvmf/common.sh@119 -- # set +e 00:27:26.752 07:07:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:26.752 07:07:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:26.752 rmmod nvme_tcp 00:27:26.752 rmmod nvme_fabrics 00:27:26.752 rmmod nvme_keyring 00:27:26.752 07:07:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:26.752 07:07:33 -- nvmf/common.sh@123 -- # set -e 00:27:26.752 07:07:33 -- nvmf/common.sh@124 -- # return 0 00:27:26.752 07:07:33 -- nvmf/common.sh@477 -- # '[' -n 3153751 ']' 00:27:26.752 07:07:33 -- nvmf/common.sh@478 -- # killprocess 3153751 00:27:26.752 07:07:33 -- common/autotest_common.sh@926 -- # '[' -z 3153751 ']' 00:27:26.752 07:07:33 -- common/autotest_common.sh@930 -- # kill -0 3153751 00:27:26.752 07:07:33 -- common/autotest_common.sh@931 -- # uname 00:27:26.752 07:07:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:26.752 07:07:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3153751 00:27:26.752 07:07:33 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:26.752 07:07:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:26.752 07:07:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3153751' 00:27:26.752 killing process with pid 3153751 00:27:26.752 07:07:33 -- common/autotest_common.sh@945 -- # kill 3153751 00:27:26.752 [2024-05-12 07:07:33.803929] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:26.752 07:07:33 -- common/autotest_common.sh@950 -- # wait 3153751 00:27:28.654 07:07:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:28.654 07:07:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:28.654 07:07:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:28.654 07:07:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.654 07:07:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:28.654 07:07:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.654 07:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:28.654 07:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.560 07:07:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:30.560 00:27:30.560 real 0m18.198s 00:27:30.560 user 0m27.028s 00:27:30.560 sys 0m2.355s 00:27:30.560 07:07:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.560 07:07:37 -- common/autotest_common.sh@10 -- # set +x 00:27:30.560 ************************************ 00:27:30.560 END TEST nvmf_identify_passthru 00:27:30.560 ************************************ 00:27:30.560 07:07:37 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:30.560 07:07:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:30.560 07:07:37 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:27:30.560 07:07:37 -- common/autotest_common.sh@10 -- # set +x 00:27:30.560 ************************************ 00:27:30.560 START TEST nvmf_dif 00:27:30.560 ************************************ 00:27:30.560 07:07:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:30.560 * Looking for test storage... 00:27:30.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:30.560 07:07:37 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.560 07:07:37 -- nvmf/common.sh@7 -- # uname -s 00:27:30.560 07:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.560 07:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.560 07:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.560 07:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.560 07:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.560 07:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.560 07:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.560 07:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.560 07:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.560 07:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.560 07:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.560 07:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.560 07:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.560 07:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.560 07:07:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.560 07:07:37 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.560 07:07:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.560 07:07:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.560 07:07:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.560 07:07:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.560 07:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.560 07:07:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.560 07:07:37 -- paths/export.sh@5 -- # export PATH 00:27:30.560 07:07:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.560 07:07:37 -- nvmf/common.sh@46 -- # : 0 00:27:30.560 07:07:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:30.560 07:07:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:30.560 07:07:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:30.560 07:07:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.560 07:07:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.560 07:07:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:30.560 07:07:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:30.560 07:07:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:30.560 07:07:37 -- target/dif.sh@15 -- # NULL_META=16 00:27:30.560 07:07:37 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:30.560 07:07:37 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:30.560 07:07:37 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:30.560 07:07:37 -- target/dif.sh@135 -- # nvmftestinit 00:27:30.560 07:07:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:30.560 07:07:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.560 07:07:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:30.560 07:07:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:30.560 07:07:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:30.560 07:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.560 07:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:30.560 07:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.560 07:07:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 
00:27:30.560 07:07:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:30.560 07:07:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:30.560 07:07:37 -- common/autotest_common.sh@10 -- # set +x 00:27:32.465 07:07:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:32.465 07:07:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:32.465 07:07:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:32.465 07:07:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:32.465 07:07:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:32.465 07:07:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:32.465 07:07:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:32.465 07:07:39 -- nvmf/common.sh@294 -- # net_devs=() 00:27:32.465 07:07:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:32.465 07:07:39 -- nvmf/common.sh@295 -- # e810=() 00:27:32.465 07:07:39 -- nvmf/common.sh@295 -- # local -ga e810 00:27:32.465 07:07:39 -- nvmf/common.sh@296 -- # x722=() 00:27:32.465 07:07:39 -- nvmf/common.sh@296 -- # local -ga x722 00:27:32.465 07:07:39 -- nvmf/common.sh@297 -- # mlx=() 00:27:32.465 07:07:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:32.465 07:07:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.465 07:07:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:32.465 07:07:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:32.465 07:07:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:32.465 07:07:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:32.465 07:07:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:32.465 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:32.465 07:07:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:32.465 07:07:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:32.465 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:32.465 07:07:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:32.465 07:07:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:27:32.465 07:07:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:32.465 07:07:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.465 07:07:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:32.465 07:07:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.465 07:07:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:32.465 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:32.465 07:07:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.465 07:07:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:32.465 07:07:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.465 07:07:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:32.465 07:07:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.465 07:07:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:32.465 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:32.466 07:07:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.466 07:07:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:32.466 07:07:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:32.466 07:07:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:32.466 07:07:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:32.466 07:07:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:32.466 07:07:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.466 07:07:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.466 07:07:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.466 07:07:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:32.466 07:07:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.466 07:07:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.466 07:07:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:27:32.466 07:07:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.466 07:07:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.466 07:07:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:32.466 07:07:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:32.466 07:07:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.466 07:07:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.466 07:07:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.466 07:07:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.466 07:07:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:32.466 07:07:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.466 07:07:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.466 07:07:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.466 07:07:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:32.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:27:32.466 00:27:32.466 --- 10.0.0.2 ping statistics --- 00:27:32.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.466 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:32.466 07:07:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:32.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:27:32.466 00:27:32.466 --- 10.0.0.1 ping statistics --- 00:27:32.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.466 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:27:32.466 07:07:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.466 07:07:39 -- nvmf/common.sh@410 -- # return 0 00:27:32.466 07:07:39 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:27:32.466 07:07:39 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:33.401 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:33.401 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:33.401 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:33.401 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:33.401 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:33.401 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:33.662 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:33.662 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:33.662 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:33.662 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:33.662 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:33.662 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:33.662 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:33.662 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:33.662 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:33.662 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:33.662 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:33.662 07:07:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.662 07:07:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 
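The namespace plumbing traced at nvmf/common.sh@241-267 (move one physical port into a namespace, address both sides, open TCP 4420, ping across) can be reproduced by hand. A sketch under the assumption of the same interface names (`cvl_0_0`, `cvl_0_1`) and addresses as the log; the guard makes it a no-op without root and the real NICs:

```shell
# Reproduces the target-namespace setup traced above. Interface names and
# 10.0.0.x addresses are copied from the log; requires root and real devices.
setup_target_ns() {
    local ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"          # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target sanity check
}

# Guard: only run on a host that actually has the devices (and as root).
if [ "$(id -u)" -eq 0 ] && ip link show cvl_0_0 >/dev/null 2>&1; then
    setup_target_ns
fi
```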
00:27:33.662 07:07:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:33.662 07:07:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.662 07:07:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:33.662 07:07:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:33.662 07:07:40 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:33.662 07:07:40 -- target/dif.sh@137 -- # nvmfappstart 00:27:33.662 07:07:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:33.662 07:07:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:33.662 07:07:40 -- common/autotest_common.sh@10 -- # set +x 00:27:33.662 07:07:40 -- nvmf/common.sh@469 -- # nvmfpid=3157039 00:27:33.662 07:07:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:33.662 07:07:40 -- nvmf/common.sh@470 -- # waitforlisten 3157039 00:27:33.662 07:07:40 -- common/autotest_common.sh@819 -- # '[' -z 3157039 ']' 00:27:33.662 07:07:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.662 07:07:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:33.662 07:07:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.662 07:07:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:33.662 07:07:40 -- common/autotest_common.sh@10 -- # set +x 00:27:33.662 [2024-05-12 07:07:40.755541] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
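The `waitforlisten 3157039` call above blocks until the freshly started `nvmf_tgt` answers on `/var/tmp/spdk.sock` (the trace shows `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`). A sketch of that polling pattern; note the socket-readiness probe here is a plain existence check, which is an assumption — the real helper issues an actual RPC against the socket:

```shell
# Sketch of the waitforlisten pattern from common/autotest_common.sh:
# poll until the pid is alive and the RPC socket appears, or give up.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    [ -n "$pid" ] || return 1
    for ((i = 0; i < max_retries; i++)); do
        # process died -> fail immediately
        kill -0 "$pid" 2>/dev/null || return 1
        # assumption: readiness == unix socket exists; the real helper
        # probes it with an RPC instead
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}
```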
00:27:33.662 [2024-05-12 07:07:40.755627] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.662 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.922 [2024-05-12 07:07:40.820770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.922 [2024-05-12 07:07:40.927957] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:33.922 [2024-05-12 07:07:40.928115] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.922 [2024-05-12 07:07:40.928140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.922 [2024-05-12 07:07:40.928153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.922 [2024-05-12 07:07:40.928198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.859 07:07:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:34.859 07:07:41 -- common/autotest_common.sh@852 -- # return 0 00:27:34.859 07:07:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:34.859 07:07:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:34.859 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:34.859 07:07:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.859 07:07:41 -- target/dif.sh@139 -- # create_transport 00:27:34.859 07:07:41 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:34.859 07:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.859 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:34.859 [2024-05-12 07:07:41.734960] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
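The transport creation above and the subsystem provisioning that follows map onto plain `scripts/rpc.py` calls; the RPC names, NQNs, and serial number below are copied verbatim from the trace. A sketch assuming an SPDK checkout at `$SPDK_DIR` with `nvmf_tgt` already running (the `if` guard keeps it a no-op elsewhere):

```shell
# RPC sequence mirroring the target/dif.sh trace: TCP transport with DIF
# insert/strip, a 64 MiB null bdev with 16-byte metadata and DIF type 1,
# and a subsystem listening on 10.0.0.2:4420.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"

if [ -x "$rpc" ]; then
    "$rpc" nvmf_create_transport -t tcp -o --dif-insert-or-strip
    "$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
fi
```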
00:27:34.859 07:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.859 07:07:41 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:34.859 07:07:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:34.859 07:07:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:34.859 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:34.859 ************************************ 00:27:34.859 START TEST fio_dif_1_default 00:27:34.859 ************************************ 00:27:34.859 07:07:41 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:27:34.859 07:07:41 -- target/dif.sh@86 -- # create_subsystems 0 00:27:34.859 07:07:41 -- target/dif.sh@28 -- # local sub 00:27:34.859 07:07:41 -- target/dif.sh@30 -- # for sub in "$@" 00:27:34.859 07:07:41 -- target/dif.sh@31 -- # create_subsystem 0 00:27:34.859 07:07:41 -- target/dif.sh@18 -- # local sub_id=0 00:27:34.859 07:07:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:34.859 07:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.859 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:34.859 bdev_null0 00:27:34.859 07:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.859 07:07:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:34.859 07:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.859 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:34.859 07:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.859 07:07:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:34.859 07:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.859 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:34.859 07:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.859 07:07:41 -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:34.859 07:07:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.859 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:34.859 [2024-05-12 07:07:41.775226] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.859 07:07:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.859 07:07:41 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:34.859 07:07:41 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:34.859 07:07:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:34.859 07:07:41 -- nvmf/common.sh@520 -- # config=() 00:27:34.859 07:07:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:34.859 07:07:41 -- nvmf/common.sh@520 -- # local subsystem config 00:27:34.859 07:07:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.859 07:07:41 -- target/dif.sh@82 -- # gen_fio_conf 00:27:34.859 07:07:41 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:34.859 07:07:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.859 { 00:27:34.859 "params": { 00:27:34.859 "name": "Nvme$subsystem", 00:27:34.859 "trtype": "$TEST_TRANSPORT", 00:27:34.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.859 "adrfam": "ipv4", 00:27:34.859 "trsvcid": "$NVMF_PORT", 00:27:34.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.859 "hdgst": ${hdgst:-false}, 00:27:34.859 "ddgst": ${ddgst:-false} 00:27:34.859 }, 00:27:34.859 "method": "bdev_nvme_attach_controller" 00:27:34.859 } 00:27:34.859 EOF 00:27:34.859 )") 00:27:34.859 07:07:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:34.859 07:07:41 -- target/dif.sh@54 -- # local 
file 00:27:34.859 07:07:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:34.859 07:07:41 -- target/dif.sh@56 -- # cat 00:27:34.859 07:07:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:34.859 07:07:41 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:34.859 07:07:41 -- common/autotest_common.sh@1320 -- # shift 00:27:34.859 07:07:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:34.859 07:07:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:34.859 07:07:41 -- nvmf/common.sh@542 -- # cat 00:27:34.859 07:07:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:34.859 07:07:41 -- target/dif.sh@72 -- # (( file <= files )) 00:27:34.859 07:07:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:34.859 07:07:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:34.859 07:07:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:34.859 07:07:41 -- nvmf/common.sh@544 -- # jq . 
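The `gen_nvmf_target_json` steps traced above (nvmf/common.sh@520-546) build the fio `--spdk_json_conf` payload by accumulating one heredoc-generated JSON object per subsystem, joining the fragments with `IFS=','`, and pretty-printing the result. A minimal sketch for the single-subsystem case, with the values as they resolve in the log:

```shell
# Per-subsystem fragment assembly mirroring gen_nvmf_target_json.
# $subsystem expands inside the heredoc, parameterizing name/subnqn/hostnqn.
config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=','
json="${config[*]}"
printf '%s\n' "$json"   # the traced script pipes this through `jq .`
```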
00:27:34.859 07:07:41 -- nvmf/common.sh@545 -- # IFS=, 00:27:34.859 07:07:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:34.859 "params": { 00:27:34.859 "name": "Nvme0", 00:27:34.859 "trtype": "tcp", 00:27:34.859 "traddr": "10.0.0.2", 00:27:34.860 "adrfam": "ipv4", 00:27:34.860 "trsvcid": "4420", 00:27:34.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:34.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:34.860 "hdgst": false, 00:27:34.860 "ddgst": false 00:27:34.860 }, 00:27:34.860 "method": "bdev_nvme_attach_controller" 00:27:34.860 }' 00:27:34.860 07:07:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:34.860 07:07:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:34.860 07:07:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:34.860 07:07:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:34.860 07:07:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:34.860 07:07:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:34.860 07:07:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:34.860 07:07:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:34.860 07:07:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:34.860 07:07:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.119 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:35.119 fio-3.35 00:27:35.119 Starting 1 thread 00:27:35.119 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.378 [2024-05-12 07:07:42.417151] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:27:35.378 [2024-05-12 07:07:42.417247] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:47.569 00:27:47.569 filename0: (groupid=0, jobs=1): err= 0: pid=3157305: Sun May 12 07:07:52 2024 00:27:47.569 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10023msec) 00:27:47.569 slat (nsec): min=6670, max=64098, avg=10173.41, stdev=5514.82 00:27:47.569 clat (usec): min=40886, max=43572, avg=41556.29, stdev=532.96 00:27:47.569 lat (usec): min=40893, max=43608, avg=41566.47, stdev=533.11 00:27:47.569 clat percentiles (usec): 00:27:47.569 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:47.569 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:27:47.569 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:47.569 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:27:47.569 | 99.99th=[43779] 00:27:47.569 bw ( KiB/s): min= 384, max= 384, per=99.81%, avg=384.00, stdev= 0.00, samples=20 00:27:47.569 iops : min= 96, max= 96, avg=96.00, stdev= 0.00, samples=20 00:27:47.569 lat (msec) : 50=100.00% 00:27:47.569 cpu : usr=90.89%, sys=8.83%, ctx=13, majf=0, minf=267 00:27:47.569 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:47.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.569 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.569 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:47.569 00:27:47.569 Run status group 0 (all jobs): 00:27:47.569 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10023-10023msec 00:27:47.569 07:07:52 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:47.569 07:07:52 -- target/dif.sh@43 -- # local sub 00:27:47.569 07:07:52 -- target/dif.sh@45 -- # for sub in "$@" 00:27:47.569 07:07:52 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:27:47.569 07:07:52 -- target/dif.sh@36 -- # local sub_id=0 00:27:47.569 07:07:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 07:07:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 00:27:47.569 real 0m11.122s 00:27:47.569 user 0m10.137s 00:27:47.569 sys 0m1.147s 00:27:47.569 07:07:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 ************************************ 00:27:47.569 END TEST fio_dif_1_default 00:27:47.569 ************************************ 00:27:47.569 07:07:52 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:47.569 07:07:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:47.569 07:07:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 ************************************ 00:27:47.569 START TEST fio_dif_1_multi_subsystems 00:27:47.569 ************************************ 00:27:47.569 07:07:52 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:27:47.569 07:07:52 -- target/dif.sh@92 -- # local files=1 00:27:47.569 07:07:52 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:47.569 07:07:52 -- target/dif.sh@28 -- # local sub 00:27:47.569 07:07:52 -- target/dif.sh@30 -- # for sub in "$@" 00:27:47.569 07:07:52 -- target/dif.sh@31 -- # create_subsystem 0 00:27:47.569 
07:07:52 -- target/dif.sh@18 -- # local sub_id=0 00:27:47.569 07:07:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 bdev_null0 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 07:07:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 07:07:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 07:07:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 [2024-05-12 07:07:52.917863] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 07:07:52 -- target/dif.sh@30 -- # for sub in "$@" 00:27:47.569 07:07:52 -- target/dif.sh@31 -- # create_subsystem 1 00:27:47.569 07:07:52 -- target/dif.sh@18 -- # local sub_id=1 00:27:47.569 07:07:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- 
common/autotest_common.sh@10 -- # set +x 00:27:47.569 bdev_null1 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 07:07:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 07:07:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 07:07:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.569 07:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.569 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:47.569 07:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:47.569 07:07:52 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:47.569 07:07:52 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:47.569 07:07:52 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:47.569 07:07:52 -- nvmf/common.sh@520 -- # config=() 00:27:47.569 07:07:52 -- nvmf/common.sh@520 -- # local subsystem config 00:27:47.569 07:07:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:47.570 07:07:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.570 07:07:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:47.570 { 00:27:47.570 "params": { 00:27:47.570 "name": "Nvme$subsystem", 00:27:47.570 "trtype": "$TEST_TRANSPORT", 00:27:47.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.570 "adrfam": "ipv4", 00:27:47.570 
"trsvcid": "$NVMF_PORT", 00:27:47.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.570 "hdgst": ${hdgst:-false}, 00:27:47.570 "ddgst": ${ddgst:-false} 00:27:47.570 }, 00:27:47.570 "method": "bdev_nvme_attach_controller" 00:27:47.570 } 00:27:47.570 EOF 00:27:47.570 )") 00:27:47.570 07:07:52 -- target/dif.sh@82 -- # gen_fio_conf 00:27:47.570 07:07:52 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.570 07:07:52 -- target/dif.sh@54 -- # local file 00:27:47.570 07:07:52 -- target/dif.sh@56 -- # cat 00:27:47.570 07:07:52 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:47.570 07:07:52 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:47.570 07:07:52 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:47.570 07:07:52 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:47.570 07:07:52 -- common/autotest_common.sh@1320 -- # shift 00:27:47.570 07:07:52 -- nvmf/common.sh@542 -- # cat 00:27:47.570 07:07:52 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:47.570 07:07:52 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:47.570 07:07:52 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:47.570 07:07:52 -- target/dif.sh@72 -- # (( file <= files )) 00:27:47.570 07:07:52 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:47.570 07:07:52 -- target/dif.sh@73 -- # cat 00:27:47.570 07:07:52 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:47.570 07:07:52 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:47.570 07:07:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:47.570 07:07:52 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:47.570 { 00:27:47.570 "params": { 00:27:47.570 "name": "Nvme$subsystem", 00:27:47.570 "trtype": "$TEST_TRANSPORT", 00:27:47.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.570 "adrfam": "ipv4", 00:27:47.570 "trsvcid": "$NVMF_PORT", 00:27:47.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.570 "hdgst": ${hdgst:-false}, 00:27:47.570 "ddgst": ${ddgst:-false} 00:27:47.570 }, 00:27:47.570 "method": "bdev_nvme_attach_controller" 00:27:47.570 } 00:27:47.570 EOF 00:27:47.570 )") 00:27:47.570 07:07:52 -- nvmf/common.sh@542 -- # cat 00:27:47.570 07:07:52 -- target/dif.sh@72 -- # (( file++ )) 00:27:47.570 07:07:52 -- target/dif.sh@72 -- # (( file <= files )) 00:27:47.570 07:07:52 -- nvmf/common.sh@544 -- # jq . 00:27:47.570 07:07:52 -- nvmf/common.sh@545 -- # IFS=, 00:27:47.570 07:07:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:47.570 "params": { 00:27:47.570 "name": "Nvme0", 00:27:47.570 "trtype": "tcp", 00:27:47.570 "traddr": "10.0.0.2", 00:27:47.570 "adrfam": "ipv4", 00:27:47.570 "trsvcid": "4420", 00:27:47.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:47.570 "hdgst": false, 00:27:47.570 "ddgst": false 00:27:47.570 }, 00:27:47.570 "method": "bdev_nvme_attach_controller" 00:27:47.570 },{ 00:27:47.570 "params": { 00:27:47.570 "name": "Nvme1", 00:27:47.570 "trtype": "tcp", 00:27:47.570 "traddr": "10.0.0.2", 00:27:47.570 "adrfam": "ipv4", 00:27:47.570 "trsvcid": "4420", 00:27:47.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:47.570 "hdgst": false, 00:27:47.570 "ddgst": false 00:27:47.570 }, 00:27:47.570 "method": "bdev_nvme_attach_controller" 00:27:47.570 }' 00:27:47.570 07:07:52 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:47.570 07:07:52 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:47.570 07:07:52 -- 
common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:47.570 07:07:52 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:47.570 07:07:52 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:47.570 07:07:52 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:47.570 07:07:52 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:47.570 07:07:52 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:47.570 07:07:52 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:47.570 07:07:52 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.570 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:47.570 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:47.570 fio-3.35 00:27:47.570 Starting 2 threads 00:27:47.570 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.570 [2024-05-12 07:07:53.697186] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:27:47.570 [2024-05-12 07:07:53.697262] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:57.537 00:27:57.537 filename0: (groupid=0, jobs=1): err= 0: pid=3158752: Sun May 12 07:08:03 2024 00:27:57.537 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10019msec) 00:27:57.537 slat (nsec): min=6978, max=31424, avg=11157.29, stdev=5285.15 00:27:57.537 clat (usec): min=40889, max=43473, avg=41537.30, stdev=521.97 00:27:57.537 lat (usec): min=40898, max=43493, avg=41548.46, stdev=522.12 00:27:57.537 clat percentiles (usec): 00:27:57.537 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:57.537 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:27:57.537 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:57.537 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:27:57.537 | 99.99th=[43254] 00:27:57.537 bw ( KiB/s): min= 352, max= 416, per=49.89%, avg=384.00, stdev=10.38, samples=20 00:27:57.537 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:27:57.537 lat (msec) : 50=100.00% 00:27:57.537 cpu : usr=95.07%, sys=4.63%, ctx=25, majf=0, minf=155 00:27:57.537 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.537 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.537 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:57.537 filename1: (groupid=0, jobs=1): err= 0: pid=3158753: Sun May 12 07:08:03 2024 00:27:57.537 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10011msec) 00:27:57.537 slat (nsec): min=6912, max=81146, avg=9303.63, stdev=4117.46 00:27:57.537 clat (usec): min=40898, max=43497, avg=41508.54, stdev=517.72 00:27:57.537 lat (usec): min=40906, max=43531, avg=41517.84, stdev=517.95 00:27:57.537 
clat percentiles (usec): 00:27:57.537 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:57.537 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:27:57.537 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:57.537 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:27:57.537 | 99.99th=[43254] 00:27:57.537 bw ( KiB/s): min= 352, max= 416, per=49.89%, avg=384.00, stdev=10.38, samples=20 00:27:57.537 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:27:57.537 lat (msec) : 50=100.00% 00:27:57.537 cpu : usr=94.51%, sys=5.19%, ctx=12, majf=0, minf=209 00:27:57.537 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.537 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.537 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:57.537 00:27:57.537 Run status group 0 (all jobs): 00:27:57.537 READ: bw=770KiB/s (788kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=7712KiB (7897kB), run=10011-10019msec 00:27:57.537 07:08:04 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:57.537 07:08:04 -- target/dif.sh@43 -- # local sub 00:27:57.537 07:08:04 -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.537 07:08:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:57.537 07:08:04 -- target/dif.sh@36 -- # local sub_id=0 00:27:57.537 07:08:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:57.537 07:08:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.537 07:08:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.537 07:08:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:57.537 07:08:04 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.537 07:08:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.537 07:08:04 -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.537 07:08:04 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:57.537 07:08:04 -- target/dif.sh@36 -- # local sub_id=1 00:27:57.537 07:08:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.537 07:08:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.537 07:08:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.537 07:08:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:57.537 07:08:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.537 07:08:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.537 00:27:57.537 real 0m11.217s 00:27:57.537 user 0m20.026s 00:27:57.537 sys 0m1.289s 00:27:57.537 07:08:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.537 ************************************ 00:27:57.537 END TEST fio_dif_1_multi_subsystems 00:27:57.537 ************************************ 00:27:57.537 07:08:04 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:57.537 07:08:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:57.537 07:08:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.537 ************************************ 00:27:57.537 START TEST fio_dif_rand_params 00:27:57.537 ************************************ 00:27:57.537 07:08:04 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:27:57.537 07:08:04 -- target/dif.sh@100 -- # local NULL_DIF 00:27:57.537 
07:08:04 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:57.537 07:08:04 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:57.537 07:08:04 -- target/dif.sh@103 -- # bs=128k 00:27:57.537 07:08:04 -- target/dif.sh@103 -- # numjobs=3 00:27:57.537 07:08:04 -- target/dif.sh@103 -- # iodepth=3 00:27:57.537 07:08:04 -- target/dif.sh@103 -- # runtime=5 00:27:57.537 07:08:04 -- target/dif.sh@105 -- # create_subsystems 0 00:27:57.537 07:08:04 -- target/dif.sh@28 -- # local sub 00:27:57.537 07:08:04 -- target/dif.sh@30 -- # for sub in "$@" 00:27:57.537 07:08:04 -- target/dif.sh@31 -- # create_subsystem 0 00:27:57.537 07:08:04 -- target/dif.sh@18 -- # local sub_id=0 00:27:57.537 07:08:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:57.537 07:08:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.537 bdev_null0 00:27:57.537 07:08:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.537 07:08:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:57.537 07:08:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.537 07:08:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.537 07:08:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:57.537 07:08:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.537 07:08:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.537 07:08:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:57.537 07:08:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.537 07:08:04 -- common/autotest_common.sh@10 -- # set 
+x 00:27:57.537 [2024-05-12 07:08:04.162788] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.537 07:08:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.537 07:08:04 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:57.537 07:08:04 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:57.537 07:08:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:57.537 07:08:04 -- nvmf/common.sh@520 -- # config=() 00:27:57.537 07:08:04 -- nvmf/common.sh@520 -- # local subsystem config 00:27:57.537 07:08:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.537 07:08:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.537 07:08:04 -- target/dif.sh@82 -- # gen_fio_conf 00:27:57.537 07:08:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.537 { 00:27:57.537 "params": { 00:27:57.537 "name": "Nvme$subsystem", 00:27:57.537 "trtype": "$TEST_TRANSPORT", 00:27:57.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.537 "adrfam": "ipv4", 00:27:57.537 "trsvcid": "$NVMF_PORT", 00:27:57.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.537 "hdgst": ${hdgst:-false}, 00:27:57.537 "ddgst": ${ddgst:-false} 00:27:57.538 }, 00:27:57.538 "method": "bdev_nvme_attach_controller" 00:27:57.538 } 00:27:57.538 EOF 00:27:57.538 )") 00:27:57.538 07:08:04 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.538 07:08:04 -- target/dif.sh@54 -- # local file 00:27:57.538 07:08:04 -- target/dif.sh@56 -- # cat 00:27:57.538 07:08:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:57.538 07:08:04 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:57.538 07:08:04 -- common/autotest_common.sh@1318 -- # local 
sanitizers 00:27:57.538 07:08:04 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:57.538 07:08:04 -- common/autotest_common.sh@1320 -- # shift 00:27:57.538 07:08:04 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:57.538 07:08:04 -- nvmf/common.sh@542 -- # cat 00:27:57.538 07:08:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.538 07:08:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:57.538 07:08:04 -- target/dif.sh@72 -- # (( file <= files )) 00:27:57.538 07:08:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:57.538 07:08:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:57.538 07:08:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:57.538 07:08:04 -- nvmf/common.sh@544 -- # jq . 00:27:57.538 07:08:04 -- nvmf/common.sh@545 -- # IFS=, 00:27:57.538 07:08:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:57.538 "params": { 00:27:57.538 "name": "Nvme0", 00:27:57.538 "trtype": "tcp", 00:27:57.538 "traddr": "10.0.0.2", 00:27:57.538 "adrfam": "ipv4", 00:27:57.538 "trsvcid": "4420", 00:27:57.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:57.538 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:57.538 "hdgst": false, 00:27:57.538 "ddgst": false 00:27:57.538 }, 00:27:57.538 "method": "bdev_nvme_attach_controller" 00:27:57.538 }' 00:27:57.538 07:08:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:57.538 07:08:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:57.538 07:08:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.538 07:08:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:57.538 07:08:04 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:57.538 07:08:04 -- common/autotest_common.sh@1324 -- # awk 
'{print $3}' 00:27:57.538 07:08:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:57.538 07:08:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:57.538 07:08:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:57.538 07:08:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.538 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:57.538 ... 00:27:57.538 fio-3.35 00:27:57.538 Starting 3 threads 00:27:57.538 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.798 [2024-05-12 07:08:04.849123] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:27:57.798 [2024-05-12 07:08:04.849208] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:03.060 00:28:03.060 filename0: (groupid=0, jobs=1): err= 0: pid=3160191: Sun May 12 07:08:10 2024 00:28:03.060 read: IOPS=127, BW=16.0MiB/s (16.8MB/s)(80.8MiB/5050msec) 00:28:03.060 slat (usec): min=7, max=100, avg=15.36, stdev= 6.27 00:28:03.060 clat (usec): min=8365, max=62897, avg=23362.55, stdev=15876.67 00:28:03.060 lat (usec): min=8377, max=62916, avg=23377.92, stdev=15877.22 00:28:03.060 clat percentiles (usec): 00:28:03.060 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[11994], 20.00th=[13304], 00:28:03.060 | 30.00th=[14746], 40.00th=[16712], 50.00th=[17433], 60.00th=[18220], 00:28:03.060 | 70.00th=[19268], 80.00th=[21103], 90.00th=[55837], 95.00th=[57934], 00:28:03.060 | 99.00th=[61080], 99.50th=[61604], 99.90th=[62653], 99.95th=[62653], 00:28:03.060 | 99.99th=[62653] 00:28:03.060 bw ( KiB/s): min=13056, max=23040, per=31.83%, avg=16460.80, stdev=2891.40, samples=10 00:28:03.060 iops : min= 102, max= 180, avg=128.60, stdev=22.59, samples=10 00:28:03.060 lat (msec) : 10=2.17%, 
20=73.53%, 50=6.35%, 100=17.96% 00:28:03.060 cpu : usr=95.19%, sys=4.40%, ctx=14, majf=0, minf=121 00:28:03.060 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.060 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.060 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.060 filename0: (groupid=0, jobs=1): err= 0: pid=3160192: Sun May 12 07:08:10 2024 00:28:03.060 read: IOPS=157, BW=19.7MiB/s (20.6MB/s)(98.4MiB/5006msec) 00:28:03.060 slat (nsec): min=6964, max=42216, avg=14368.61, stdev=4284.21 00:28:03.060 clat (usec): min=5512, max=94211, avg=19059.58, stdev=14439.93 00:28:03.060 lat (usec): min=5525, max=94223, avg=19073.95, stdev=14440.55 00:28:03.060 clat percentiles (usec): 00:28:03.060 | 1.00th=[ 5997], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 7046], 00:28:03.060 | 30.00th=[ 9634], 40.00th=[11076], 50.00th=[15664], 60.00th=[19530], 00:28:03.060 | 70.00th=[22152], 80.00th=[25297], 90.00th=[36439], 95.00th=[55837], 00:28:03.060 | 99.00th=[67634], 99.50th=[72877], 99.90th=[93848], 99.95th=[93848], 00:28:03.060 | 99.99th=[93848] 00:28:03.060 bw ( KiB/s): min=15360, max=27648, per=38.82%, avg=20074.10, stdev=3830.37, samples=10 00:28:03.060 iops : min= 120, max= 216, avg=156.80, stdev=29.94, samples=10 00:28:03.060 lat (msec) : 10=33.80%, 20=27.70%, 50=31.39%, 100=7.12% 00:28:03.060 cpu : usr=95.14%, sys=4.20%, ctx=17, majf=0, minf=161 00:28:03.060 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.060 issued rwts: total=787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.060 latency : target=0, window=0, percentile=100.00%, depth=3 
00:28:03.060 filename0: (groupid=0, jobs=1): err= 0: pid=3160193: Sun May 12 07:08:10 2024 00:28:03.060 read: IOPS=120, BW=15.0MiB/s (15.8MB/s)(75.9MiB/5042msec) 00:28:03.060 slat (usec): min=4, max=101, avg=17.73, stdev= 6.08 00:28:03.060 clat (usec): min=7007, max=94884, avg=24828.44, stdev=17683.37 00:28:03.060 lat (usec): min=7022, max=94897, avg=24846.16, stdev=17683.11 00:28:03.060 clat percentiles (usec): 00:28:03.060 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[11207], 00:28:03.060 | 30.00th=[14222], 40.00th=[17433], 50.00th=[19530], 60.00th=[20579], 00:28:03.060 | 70.00th=[21627], 80.00th=[49546], 90.00th=[57410], 95.00th=[60556], 00:28:03.060 | 99.00th=[62653], 99.50th=[63701], 99.90th=[94897], 99.95th=[94897], 00:28:03.060 | 99.99th=[94897] 00:28:03.060 bw ( KiB/s): min= 8704, max=25344, per=29.95%, avg=15485.50, stdev=4776.49, samples=10 00:28:03.060 iops : min= 68, max= 198, avg=120.90, stdev=37.38, samples=10 00:28:03.060 lat (msec) : 10=10.05%, 20=43.82%, 50=26.19%, 100=19.93% 00:28:03.060 cpu : usr=95.08%, sys=4.05%, ctx=233, majf=0, minf=220 00:28:03.060 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.060 issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.060 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.060 00:28:03.060 Run status group 0 (all jobs): 00:28:03.060 READ: bw=50.5MiB/s (52.9MB/s), 15.0MiB/s-19.7MiB/s (15.8MB/s-20.6MB/s), io=255MiB (267MB), run=5006-5050msec 00:28:03.318 07:08:10 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:03.318 07:08:10 -- target/dif.sh@43 -- # local sub 00:28:03.318 07:08:10 -- target/dif.sh@45 -- # for sub in "$@" 00:28:03.318 07:08:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:03.318 07:08:10 -- target/dif.sh@36 -- # local sub_id=0 
00:28:03.318 07:08:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:03.318 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.318 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.318 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.318 07:08:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:03.318 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.318 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.318 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.318 07:08:10 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:03.318 07:08:10 -- target/dif.sh@109 -- # bs=4k 00:28:03.318 07:08:10 -- target/dif.sh@109 -- # numjobs=8 00:28:03.318 07:08:10 -- target/dif.sh@109 -- # iodepth=16 00:28:03.318 07:08:10 -- target/dif.sh@109 -- # runtime= 00:28:03.318 07:08:10 -- target/dif.sh@109 -- # files=2 00:28:03.318 07:08:10 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:03.318 07:08:10 -- target/dif.sh@28 -- # local sub 00:28:03.318 07:08:10 -- target/dif.sh@30 -- # for sub in "$@" 00:28:03.318 07:08:10 -- target/dif.sh@31 -- # create_subsystem 0 00:28:03.318 07:08:10 -- target/dif.sh@18 -- # local sub_id=0 00:28:03.318 07:08:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:03.318 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.318 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.318 bdev_null0 00:28:03.318 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.318 07:08:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:03.318 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.318 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.318 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:28:03.318 07:08:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:03.318 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.318 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.318 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.318 07:08:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:03.318 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.318 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.318 [2024-05-12 07:08:10.358025] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.318 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.318 07:08:10 -- target/dif.sh@30 -- # for sub in "$@" 00:28:03.318 07:08:10 -- target/dif.sh@31 -- # create_subsystem 1 00:28:03.318 07:08:10 -- target/dif.sh@18 -- # local sub_id=1 00:28:03.318 07:08:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:03.318 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.318 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.318 bdev_null1 00:28:03.318 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.318 07:08:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:03.318 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.318 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.318 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.319 07:08:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:03.319 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.319 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.319 
07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.319 07:08:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.319 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.319 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.319 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.319 07:08:10 -- target/dif.sh@30 -- # for sub in "$@" 00:28:03.319 07:08:10 -- target/dif.sh@31 -- # create_subsystem 2 00:28:03.319 07:08:10 -- target/dif.sh@18 -- # local sub_id=2 00:28:03.319 07:08:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:03.319 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.319 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.319 bdev_null2 00:28:03.319 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.319 07:08:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:03.319 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.319 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.319 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.319 07:08:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:03.319 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.319 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.319 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.319 07:08:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:03.319 07:08:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.319 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.319 07:08:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:28:03.319 07:08:10 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:03.319 07:08:10 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:03.319 07:08:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:03.319 07:08:10 -- nvmf/common.sh@520 -- # config=() 00:28:03.319 07:08:10 -- nvmf/common.sh@520 -- # local subsystem config 00:28:03.319 07:08:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:03.319 07:08:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.319 07:08:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:03.319 { 00:28:03.319 "params": { 00:28:03.319 "name": "Nvme$subsystem", 00:28:03.319 "trtype": "$TEST_TRANSPORT", 00:28:03.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.319 "adrfam": "ipv4", 00:28:03.319 "trsvcid": "$NVMF_PORT", 00:28:03.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.319 "hdgst": ${hdgst:-false}, 00:28:03.319 "ddgst": ${ddgst:-false} 00:28:03.319 }, 00:28:03.319 "method": "bdev_nvme_attach_controller" 00:28:03.319 } 00:28:03.319 EOF 00:28:03.319 )") 00:28:03.319 07:08:10 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.319 07:08:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:03.319 07:08:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:03.319 07:08:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:03.319 07:08:10 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.319 07:08:10 -- target/dif.sh@82 -- # gen_fio_conf 00:28:03.319 07:08:10 -- common/autotest_common.sh@1320 -- # shift 00:28:03.319 07:08:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:03.319 
07:08:10 -- target/dif.sh@54 -- # local file 00:28:03.319 07:08:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:03.319 07:08:10 -- target/dif.sh@56 -- # cat 00:28:03.319 07:08:10 -- nvmf/common.sh@542 -- # cat 00:28:03.319 07:08:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.319 07:08:10 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:03.319 07:08:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:03.319 07:08:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:03.319 07:08:10 -- target/dif.sh@72 -- # (( file <= files )) 00:28:03.319 07:08:10 -- target/dif.sh@73 -- # cat 00:28:03.319 07:08:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:03.319 07:08:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:03.319 { 00:28:03.319 "params": { 00:28:03.319 "name": "Nvme$subsystem", 00:28:03.319 "trtype": "$TEST_TRANSPORT", 00:28:03.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.319 "adrfam": "ipv4", 00:28:03.319 "trsvcid": "$NVMF_PORT", 00:28:03.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.319 "hdgst": ${hdgst:-false}, 00:28:03.319 "ddgst": ${ddgst:-false} 00:28:03.319 }, 00:28:03.319 "method": "bdev_nvme_attach_controller" 00:28:03.319 } 00:28:03.319 EOF 00:28:03.319 )") 00:28:03.319 07:08:10 -- nvmf/common.sh@542 -- # cat 00:28:03.319 07:08:10 -- target/dif.sh@72 -- # (( file++ )) 00:28:03.319 07:08:10 -- target/dif.sh@72 -- # (( file <= files )) 00:28:03.319 07:08:10 -- target/dif.sh@73 -- # cat 00:28:03.319 07:08:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:03.319 07:08:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:03.319 { 00:28:03.319 "params": { 00:28:03.319 "name": "Nvme$subsystem", 00:28:03.319 "trtype": "$TEST_TRANSPORT", 00:28:03.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.319 "adrfam": "ipv4", 
00:28:03.319 "trsvcid": "$NVMF_PORT", 00:28:03.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.319 "hdgst": ${hdgst:-false}, 00:28:03.319 "ddgst": ${ddgst:-false} 00:28:03.319 }, 00:28:03.319 "method": "bdev_nvme_attach_controller" 00:28:03.319 } 00:28:03.319 EOF 00:28:03.319 )") 00:28:03.319 07:08:10 -- target/dif.sh@72 -- # (( file++ )) 00:28:03.319 07:08:10 -- target/dif.sh@72 -- # (( file <= files )) 00:28:03.319 07:08:10 -- nvmf/common.sh@542 -- # cat 00:28:03.319 07:08:10 -- nvmf/common.sh@544 -- # jq . 00:28:03.319 07:08:10 -- nvmf/common.sh@545 -- # IFS=, 00:28:03.319 07:08:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:03.319 "params": { 00:28:03.319 "name": "Nvme0", 00:28:03.319 "trtype": "tcp", 00:28:03.319 "traddr": "10.0.0.2", 00:28:03.319 "adrfam": "ipv4", 00:28:03.319 "trsvcid": "4420", 00:28:03.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:03.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:03.319 "hdgst": false, 00:28:03.319 "ddgst": false 00:28:03.319 }, 00:28:03.319 "method": "bdev_nvme_attach_controller" 00:28:03.319 },{ 00:28:03.319 "params": { 00:28:03.319 "name": "Nvme1", 00:28:03.319 "trtype": "tcp", 00:28:03.319 "traddr": "10.0.0.2", 00:28:03.319 "adrfam": "ipv4", 00:28:03.319 "trsvcid": "4420", 00:28:03.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.319 "hdgst": false, 00:28:03.319 "ddgst": false 00:28:03.319 }, 00:28:03.319 "method": "bdev_nvme_attach_controller" 00:28:03.319 },{ 00:28:03.319 "params": { 00:28:03.319 "name": "Nvme2", 00:28:03.319 "trtype": "tcp", 00:28:03.319 "traddr": "10.0.0.2", 00:28:03.319 "adrfam": "ipv4", 00:28:03.319 "trsvcid": "4420", 00:28:03.319 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:03.319 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:03.319 "hdgst": false, 00:28:03.319 "ddgst": false 00:28:03.319 }, 00:28:03.319 "method": "bdev_nvme_attach_controller" 
00:28:03.319 }' 00:28:03.578 07:08:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:03.578 07:08:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:03.578 07:08:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:03.578 07:08:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.578 07:08:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:03.578 07:08:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:03.578 07:08:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:03.578 07:08:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:03.578 07:08:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:03.578 07:08:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.578 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:03.578 ... 00:28:03.578 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:03.578 ... 00:28:03.578 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:03.578 ... 00:28:03.578 fio-3.35 00:28:03.578 Starting 24 threads 00:28:03.838 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.406 [2024-05-12 07:08:11.382085] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:04.406 [2024-05-12 07:08:11.382149] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:16.610 00:28:16.610 filename0: (groupid=0, jobs=1): err= 0: pid=3161079: Sun May 12 07:08:21 2024 00:28:16.610 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10101msec) 00:28:16.610 slat (nsec): min=8175, max=51426, avg=26934.13, stdev=10170.31 00:28:16.610 clat (msec): min=200, max=417, avg=325.57, stdev=27.68 00:28:16.610 lat (msec): min=200, max=417, avg=325.59, stdev=27.68 00:28:16.610 clat percentiles (msec): 00:28:16.610 | 1.00th=[ 257], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 313], 00:28:16.610 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.610 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 380], 00:28:16.610 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:28:16.610 | 99.99th=[ 418] 00:28:16.610 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=192.00, stdev=62.72, samples=20 00:28:16.610 iops : min= 32, max= 64, avg=48.00, stdev=15.68, samples=20 00:28:16.610 lat (msec) : 250=0.40%, 500=99.60% 00:28:16.610 cpu : usr=98.93%, sys=0.66%, ctx=13, majf=0, minf=9 00:28:16.610 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:28:16.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.610 filename0: (groupid=0, jobs=1): err= 0: pid=3161080: Sun May 12 07:08:21 2024 00:28:16.610 read: IOPS=70, BW=281KiB/s (288kB/s)(2816KiB/10028msec) 00:28:16.610 slat (nsec): min=6290, max=84105, avg=14873.74, stdev=11428.39 00:28:16.610 clat (msec): min=160, max=401, avg=227.74, stdev=44.50 00:28:16.610 lat (msec): min=160, max=401, avg=227.76, stdev=44.50 00:28:16.610 clat percentiles (msec): 
00:28:16.610 | 1.00th=[ 161], 5.00th=[ 171], 10.00th=[ 178], 20.00th=[ 184], 00:28:16.610 | 30.00th=[ 190], 40.00th=[ 205], 50.00th=[ 218], 60.00th=[ 243], 00:28:16.610 | 70.00th=[ 264], 80.00th=[ 264], 90.00th=[ 284], 95.00th=[ 309], 00:28:16.610 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 401], 99.95th=[ 401], 00:28:16.610 | 99.99th=[ 401] 00:28:16.610 bw ( KiB/s): min= 128, max= 384, per=5.33%, avg=275.20, stdev=75.15, samples=20 00:28:16.610 iops : min= 32, max= 96, avg=68.80, stdev=18.79, samples=20 00:28:16.610 lat (msec) : 250=61.65%, 500=38.35% 00:28:16.610 cpu : usr=98.59%, sys=0.97%, ctx=22, majf=0, minf=9 00:28:16.610 IO depths : 1=5.8%, 2=11.9%, 4=24.6%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:16.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.610 filename0: (groupid=0, jobs=1): err= 0: pid=3161081: Sun May 12 07:08:21 2024 00:28:16.610 read: IOPS=49, BW=197KiB/s (201kB/s)(1984KiB/10094msec) 00:28:16.610 slat (usec): min=4, max=148, avg=33.09, stdev=15.75 00:28:16.610 clat (msec): min=252, max=475, avg=325.29, stdev=33.92 00:28:16.610 lat (msec): min=253, max=475, avg=325.32, stdev=33.92 00:28:16.610 clat percentiles (msec): 00:28:16.610 | 1.00th=[ 253], 5.00th=[ 284], 10.00th=[ 296], 20.00th=[ 309], 00:28:16.610 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.610 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 342], 95.00th=[ 355], 00:28:16.610 | 99.00th=[ 477], 99.50th=[ 477], 99.90th=[ 477], 99.95th=[ 477], 00:28:16.610 | 99.99th=[ 477] 00:28:16.610 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=192.00, stdev=65.66, samples=20 00:28:16.610 iops : min= 32, max= 64, avg=48.00, stdev=16.42, samples=20 00:28:16.610 lat (msec) : 500=100.00% 00:28:16.610 cpu : 
usr=97.22%, sys=1.54%, ctx=56, majf=0, minf=9 00:28:16.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:16.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.610 filename0: (groupid=0, jobs=1): err= 0: pid=3161082: Sun May 12 07:08:21 2024 00:28:16.610 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10126msec) 00:28:16.610 slat (usec): min=8, max=126, avg=27.20, stdev=12.56 00:28:16.610 clat (msec): min=174, max=449, avg=315.51, stdev=46.13 00:28:16.610 lat (msec): min=174, max=449, avg=315.54, stdev=46.13 00:28:16.610 clat percentiles (msec): 00:28:16.610 | 1.00th=[ 176], 5.00th=[ 209], 10.00th=[ 253], 20.00th=[ 296], 00:28:16.610 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.610 | 70.00th=[ 326], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 397], 00:28:16.610 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 451], 99.95th=[ 451], 00:28:16.610 | 99.99th=[ 451] 00:28:16.610 bw ( KiB/s): min= 128, max= 256, per=3.84%, avg=198.40, stdev=59.28, samples=20 00:28:16.610 iops : min= 32, max= 64, avg=49.60, stdev=14.82, samples=20 00:28:16.610 lat (msec) : 250=8.20%, 500=91.80% 00:28:16.610 cpu : usr=96.94%, sys=1.70%, ctx=151, majf=0, minf=9 00:28:16.610 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:16.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.610 filename0: (groupid=0, jobs=1): err= 0: pid=3161083: Sun May 12 07:08:21 2024 00:28:16.610 read: IOPS=49, 
BW=196KiB/s (201kB/s)(1984KiB/10119msec) 00:28:16.610 slat (usec): min=3, max=100, avg=29.02, stdev=12.55 00:28:16.610 clat (msec): min=185, max=477, avg=326.13, stdev=53.11 00:28:16.610 lat (msec): min=185, max=477, avg=326.15, stdev=53.11 00:28:16.610 clat percentiles (msec): 00:28:16.610 | 1.00th=[ 192], 5.00th=[ 226], 10.00th=[ 284], 20.00th=[ 309], 00:28:16.610 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.610 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 447], 00:28:16.610 | 99.00th=[ 472], 99.50th=[ 477], 99.90th=[ 477], 99.95th=[ 477], 00:28:16.610 | 99.99th=[ 477] 00:28:16.610 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=192.00, stdev=61.20, samples=20 00:28:16.610 iops : min= 32, max= 64, avg=48.00, stdev=15.30, samples=20 00:28:16.610 lat (msec) : 250=8.87%, 500=91.13% 00:28:16.610 cpu : usr=98.13%, sys=1.19%, ctx=116, majf=0, minf=9 00:28:16.610 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:16.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.610 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.611 filename0: (groupid=0, jobs=1): err= 0: pid=3161084: Sun May 12 07:08:21 2024 00:28:16.611 read: IOPS=50, BW=203KiB/s (208kB/s)(2048KiB/10100msec) 00:28:16.611 slat (usec): min=6, max=111, avg=18.69, stdev=13.25 00:28:16.611 clat (msec): min=174, max=442, avg=313.12, stdev=41.35 00:28:16.611 lat (msec): min=174, max=442, avg=313.14, stdev=41.35 00:28:16.611 clat percentiles (msec): 00:28:16.611 | 1.00th=[ 176], 5.00th=[ 234], 10.00th=[ 264], 20.00th=[ 305], 00:28:16.611 | 30.00th=[ 313], 40.00th=[ 317], 50.00th=[ 317], 60.00th=[ 326], 00:28:16.611 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 376], 00:28:16.611 | 99.00th=[ 397], 99.50th=[ 435], 99.90th=[ 
443], 99.95th=[ 443], 00:28:16.611 | 99.99th=[ 443] 00:28:16.611 bw ( KiB/s): min= 128, max= 256, per=3.84%, avg=198.40, stdev=60.85, samples=20 00:28:16.611 iops : min= 32, max= 64, avg=49.60, stdev=15.21, samples=20 00:28:16.611 lat (msec) : 250=8.20%, 500=91.80% 00:28:16.611 cpu : usr=98.05%, sys=1.13%, ctx=34, majf=0, minf=9 00:28:16.611 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:16.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.611 filename0: (groupid=0, jobs=1): err= 0: pid=3161085: Sun May 12 07:08:21 2024 00:28:16.611 read: IOPS=74, BW=299KiB/s (306kB/s)(3032KiB/10145msec) 00:28:16.611 slat (nsec): min=3706, max=45601, avg=13079.96, stdev=5003.87 00:28:16.611 clat (msec): min=13, max=366, avg=213.51, stdev=50.70 00:28:16.611 lat (msec): min=13, max=366, avg=213.52, stdev=50.70 00:28:16.611 clat percentiles (msec): 00:28:16.611 | 1.00th=[ 14], 5.00th=[ 136], 10.00th=[ 169], 20.00th=[ 190], 00:28:16.611 | 30.00th=[ 203], 40.00th=[ 215], 50.00th=[ 222], 60.00th=[ 226], 00:28:16.611 | 70.00th=[ 234], 80.00th=[ 239], 90.00th=[ 264], 95.00th=[ 292], 00:28:16.611 | 99.00th=[ 317], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 368], 00:28:16.611 | 99.99th=[ 368] 00:28:16.611 bw ( KiB/s): min= 256, max= 384, per=5.74%, avg=296.80, stdev=43.89, samples=20 00:28:16.611 iops : min= 64, max= 96, avg=74.20, stdev=10.97, samples=20 00:28:16.611 lat (msec) : 20=2.11%, 100=2.11%, 250=82.59%, 500=13.19% 00:28:16.611 cpu : usr=98.22%, sys=1.17%, ctx=46, majf=0, minf=9 00:28:16.611 IO depths : 1=0.8%, 2=2.9%, 4=12.5%, 8=72.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:28:16.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 complete : 0=0.0%, 4=90.6%, 
8=3.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 issued rwts: total=758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.611 filename0: (groupid=0, jobs=1): err= 0: pid=3161086: Sun May 12 07:08:21 2024 00:28:16.611 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10117msec) 00:28:16.611 slat (nsec): min=3908, max=68937, avg=32721.80, stdev=11182.02 00:28:16.611 clat (msec): min=225, max=528, avg=326.03, stdev=36.94 00:28:16.611 lat (msec): min=225, max=528, avg=326.06, stdev=36.94 00:28:16.611 clat percentiles (msec): 00:28:16.611 | 1.00th=[ 226], 5.00th=[ 284], 10.00th=[ 305], 20.00th=[ 309], 00:28:16.611 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.611 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 347], 95.00th=[ 355], 00:28:16.611 | 99.00th=[ 472], 99.50th=[ 472], 99.90th=[ 527], 99.95th=[ 527], 00:28:16.611 | 99.99th=[ 527] 00:28:16.611 bw ( KiB/s): min= 112, max= 256, per=3.72%, avg=192.00, stdev=65.87, samples=20 00:28:16.611 iops : min= 28, max= 64, avg=48.00, stdev=16.47, samples=20 00:28:16.611 lat (msec) : 250=3.63%, 500=95.97%, 750=0.40% 00:28:16.611 cpu : usr=98.74%, sys=0.84%, ctx=15, majf=0, minf=9 00:28:16.611 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:16.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.611 filename1: (groupid=0, jobs=1): err= 0: pid=3161087: Sun May 12 07:08:21 2024 00:28:16.611 read: IOPS=48, BW=196KiB/s (201kB/s)(1984KiB/10126msec) 00:28:16.611 slat (nsec): min=8630, max=59764, avg=25673.59, stdev=9263.83 00:28:16.611 clat (msec): min=238, max=417, avg=325.70, stdev=31.06 00:28:16.611 lat (msec): min=238, max=417, avg=325.72, 
stdev=31.06 00:28:16.611 clat percentiles (msec): 00:28:16.611 | 1.00th=[ 255], 5.00th=[ 268], 10.00th=[ 292], 20.00th=[ 313], 00:28:16.611 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.611 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 393], 00:28:16.611 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:28:16.611 | 99.99th=[ 418] 00:28:16.611 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=192.00, stdev=61.20, samples=20 00:28:16.611 iops : min= 32, max= 64, avg=48.00, stdev=15.30, samples=20 00:28:16.611 lat (msec) : 250=0.81%, 500=99.19% 00:28:16.611 cpu : usr=98.96%, sys=0.63%, ctx=23, majf=0, minf=9 00:28:16.611 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:16.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.611 filename1: (groupid=0, jobs=1): err= 0: pid=3161088: Sun May 12 07:08:21 2024 00:28:16.611 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10118msec) 00:28:16.611 slat (usec): min=3, max=199, avg=34.43, stdev=22.23 00:28:16.611 clat (msec): min=186, max=528, avg=326.04, stdev=54.04 00:28:16.611 lat (msec): min=186, max=528, avg=326.07, stdev=54.04 00:28:16.611 clat percentiles (msec): 00:28:16.611 | 1.00th=[ 192], 5.00th=[ 226], 10.00th=[ 284], 20.00th=[ 309], 00:28:16.611 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.611 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 447], 00:28:16.611 | 99.00th=[ 477], 99.50th=[ 477], 99.90th=[ 527], 99.95th=[ 527], 00:28:16.611 | 99.99th=[ 527] 00:28:16.611 bw ( KiB/s): min= 112, max= 256, per=3.72%, avg=192.00, stdev=61.42, samples=20 00:28:16.611 iops : min= 28, max= 64, avg=48.00, stdev=15.36, samples=20 00:28:16.611 lat 
(msec) : 250=9.27%, 500=90.32%, 750=0.40% 00:28:16.611 cpu : usr=97.01%, sys=1.76%, ctx=78, majf=0, minf=9 00:28:16.611 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:28:16.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.611 filename1: (groupid=0, jobs=1): err= 0: pid=3161089: Sun May 12 07:08:21 2024 00:28:16.611 read: IOPS=51, BW=204KiB/s (209kB/s)(2048KiB/10015msec) 00:28:16.611 slat (usec): min=8, max=106, avg=25.03, stdev=13.74 00:28:16.611 clat (msec): min=174, max=431, avg=312.74, stdev=40.55 00:28:16.611 lat (msec): min=174, max=431, avg=312.77, stdev=40.55 00:28:16.611 clat percentiles (msec): 00:28:16.611 | 1.00th=[ 176], 5.00th=[ 215], 10.00th=[ 255], 20.00th=[ 305], 00:28:16.611 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 326], 00:28:16.611 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 355], 00:28:16.611 | 99.00th=[ 414], 99.50th=[ 422], 99.90th=[ 430], 99.95th=[ 430], 00:28:16.611 | 99.99th=[ 430] 00:28:16.611 bw ( KiB/s): min= 128, max= 256, per=3.84%, avg=198.40, stdev=60.85, samples=20 00:28:16.611 iops : min= 32, max= 64, avg=49.60, stdev=15.21, samples=20 00:28:16.611 lat (msec) : 250=8.98%, 500=91.02% 00:28:16.611 cpu : usr=98.43%, sys=1.11%, ctx=33, majf=0, minf=9 00:28:16.611 IO depths : 1=3.5%, 2=9.2%, 4=24.0%, 8=54.3%, 16=9.0%, 32=0.0%, >=64=0.0% 00:28:16.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.611 filename1: (groupid=0, jobs=1): err= 0: pid=3161090: 
Sun May 12 07:08:21 2024 00:28:16.611 read: IOPS=51, BW=208KiB/s (213kB/s)(2104KiB/10138msec) 00:28:16.611 slat (usec): min=8, max=360, avg=29.69, stdev=29.62 00:28:16.611 clat (msec): min=184, max=477, avg=307.70, stdev=52.57 00:28:16.611 lat (msec): min=184, max=477, avg=307.73, stdev=52.57 00:28:16.611 clat percentiles (msec): 00:28:16.611 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 226], 20.00th=[ 284], 00:28:16.611 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 317], 60.00th=[ 321], 00:28:16.611 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 368], 00:28:16.611 | 99.00th=[ 451], 99.50th=[ 477], 99.90th=[ 477], 99.95th=[ 477], 00:28:16.611 | 99.99th=[ 477] 00:28:16.611 bw ( KiB/s): min= 128, max= 256, per=3.94%, avg=204.00, stdev=60.73, samples=20 00:28:16.611 iops : min= 32, max= 64, avg=51.00, stdev=15.18, samples=20 00:28:16.611 lat (msec) : 250=13.31%, 500=86.69% 00:28:16.611 cpu : usr=96.65%, sys=1.78%, ctx=35, majf=0, minf=9 00:28:16.611 IO depths : 1=3.6%, 2=9.9%, 4=25.1%, 8=52.7%, 16=8.7%, 32=0.0%, >=64=0.0% 00:28:16.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.611 filename1: (groupid=0, jobs=1): err= 0: pid=3161091: Sun May 12 07:08:21 2024 00:28:16.611 read: IOPS=49, BW=197KiB/s (201kB/s)(1984KiB/10096msec) 00:28:16.611 slat (usec): min=5, max=146, avg=28.66, stdev=13.64 00:28:16.611 clat (msec): min=200, max=532, avg=325.42, stdev=44.35 00:28:16.611 lat (msec): min=200, max=532, avg=325.45, stdev=44.35 00:28:16.611 clat percentiles (msec): 00:28:16.611 | 1.00th=[ 203], 5.00th=[ 253], 10.00th=[ 292], 20.00th=[ 309], 00:28:16.611 | 30.00th=[ 313], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.611 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 430], 
00:28:16.611 | 99.00th=[ 477], 99.50th=[ 477], 99.90th=[ 535], 99.95th=[ 535], 00:28:16.611 | 99.99th=[ 535] 00:28:16.611 bw ( KiB/s): min= 112, max= 256, per=3.72%, avg=192.00, stdev=62.94, samples=20 00:28:16.611 iops : min= 28, max= 64, avg=48.00, stdev=15.73, samples=20 00:28:16.611 lat (msec) : 250=3.23%, 500=96.37%, 750=0.40% 00:28:16.611 cpu : usr=97.73%, sys=1.41%, ctx=37, majf=0, minf=9 00:28:16.611 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:28:16.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.611 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.612 filename1: (groupid=0, jobs=1): err= 0: pid=3161092: Sun May 12 07:08:21 2024 00:28:16.612 read: IOPS=73, BW=293KiB/s (301kB/s)(2944KiB/10032msec) 00:28:16.612 slat (nsec): min=3716, max=32296, avg=10622.59, stdev=4234.27 00:28:16.612 clat (msec): min=5, max=393, avg=218.01, stdev=61.63 00:28:16.612 lat (msec): min=5, max=393, avg=218.02, stdev=61.63 00:28:16.612 clat percentiles (msec): 00:28:16.612 | 1.00th=[ 6], 5.00th=[ 122], 10.00th=[ 169], 20.00th=[ 184], 00:28:16.612 | 30.00th=[ 205], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 234], 00:28:16.612 | 70.00th=[ 243], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 321], 00:28:16.612 | 99.00th=[ 372], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:28:16.612 | 99.99th=[ 393] 00:28:16.612 bw ( KiB/s): min= 208, max= 512, per=5.58%, avg=288.00, stdev=68.08, samples=20 00:28:16.612 iops : min= 52, max= 128, avg=72.00, stdev=17.02, samples=20 00:28:16.612 lat (msec) : 10=2.17%, 20=0.95%, 50=1.22%, 250=74.46%, 500=21.20% 00:28:16.612 cpu : usr=97.76%, sys=1.47%, ctx=34, majf=0, minf=9 00:28:16.612 IO depths : 1=0.5%, 2=2.4%, 4=11.1%, 8=73.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:16.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.612 filename1: (groupid=0, jobs=1): err= 0: pid=3161093: Sun May 12 07:08:21 2024 00:28:16.612 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10112msec) 00:28:16.612 slat (nsec): min=7691, max=56801, avg=30812.38, stdev=10012.30 00:28:16.612 clat (msec): min=189, max=477, avg=325.92, stdev=39.67 00:28:16.612 lat (msec): min=189, max=477, avg=325.95, stdev=39.67 00:28:16.612 clat percentiles (msec): 00:28:16.612 | 1.00th=[ 218], 5.00th=[ 284], 10.00th=[ 296], 20.00th=[ 309], 00:28:16.612 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.612 | 70.00th=[ 338], 80.00th=[ 342], 90.00th=[ 347], 95.00th=[ 355], 00:28:16.612 | 99.00th=[ 468], 99.50th=[ 472], 99.90th=[ 477], 99.95th=[ 477], 00:28:16.612 | 99.99th=[ 477] 00:28:16.612 bw ( KiB/s): min= 128, max= 256, per=3.70%, avg=191.90, stdev=56.27, samples=20 00:28:16.612 iops : min= 32, max= 64, avg=47.95, stdev=14.04, samples=20 00:28:16.612 lat (msec) : 250=4.44%, 500=95.56% 00:28:16.612 cpu : usr=98.84%, sys=0.72%, ctx=13, majf=0, minf=9 00:28:16.612 IO depths : 1=1.8%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:28:16.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.612 filename1: (groupid=0, jobs=1): err= 0: pid=3161094: Sun May 12 07:08:21 2024 00:28:16.612 read: IOPS=48, BW=196KiB/s (201kB/s)(1984KiB/10126msec) 00:28:16.612 slat (nsec): min=10001, max=59186, avg=27392.67, stdev=9363.46 00:28:16.612 clat (msec): min=243, max=417, avg=325.70, 
stdev=25.36 00:28:16.612 lat (msec): min=243, max=417, avg=325.73, stdev=25.36 00:28:16.612 clat percentiles (msec): 00:28:16.612 | 1.00th=[ 262], 5.00th=[ 292], 10.00th=[ 296], 20.00th=[ 313], 00:28:16.612 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.612 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 355], 00:28:16.612 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:28:16.612 | 99.99th=[ 418] 00:28:16.612 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=192.00, stdev=56.39, samples=20 00:28:16.612 iops : min= 32, max= 64, avg=48.00, stdev=14.10, samples=20 00:28:16.612 lat (msec) : 250=0.40%, 500=99.60% 00:28:16.612 cpu : usr=99.07%, sys=0.52%, ctx=14, majf=0, minf=9 00:28:16.612 IO depths : 1=1.2%, 2=7.5%, 4=25.0%, 8=55.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:28:16.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.612 filename2: (groupid=0, jobs=1): err= 0: pid=3161095: Sun May 12 07:08:21 2024 00:28:16.612 read: IOPS=50, BW=203KiB/s (208kB/s)(2048KiB/10100msec) 00:28:16.612 slat (nsec): min=9254, max=62410, avg=30142.52, stdev=9513.45 00:28:16.612 clat (msec): min=174, max=354, avg=315.36, stdev=30.89 00:28:16.612 lat (msec): min=174, max=354, avg=315.39, stdev=30.89 00:28:16.612 clat percentiles (msec): 00:28:16.612 | 1.00th=[ 176], 5.00th=[ 266], 10.00th=[ 292], 20.00th=[ 309], 00:28:16.612 | 30.00th=[ 313], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 326], 00:28:16.612 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 342], 95.00th=[ 355], 00:28:16.612 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:28:16.612 | 99.99th=[ 355] 00:28:16.612 bw ( KiB/s): min= 128, max= 256, per=3.84%, avg=198.40, stdev=65.33, samples=20 
00:28:16.612 iops : min= 32, max= 64, avg=49.60, stdev=16.33, samples=20 00:28:16.612 lat (msec) : 250=3.52%, 500=96.48% 00:28:16.612 cpu : usr=98.91%, sys=0.68%, ctx=13, majf=0, minf=9 00:28:16.612 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:16.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.612 filename2: (groupid=0, jobs=1): err= 0: pid=3161096: Sun May 12 07:08:21 2024 00:28:16.612 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10111msec) 00:28:16.612 slat (nsec): min=7598, max=61011, avg=32502.50, stdev=8214.87 00:28:16.612 clat (msec): min=187, max=477, avg=325.82, stdev=38.87 00:28:16.612 lat (msec): min=187, max=477, avg=325.85, stdev=38.86 00:28:16.612 clat percentiles (msec): 00:28:16.612 | 1.00th=[ 226], 5.00th=[ 284], 10.00th=[ 296], 20.00th=[ 309], 00:28:16.612 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.612 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 347], 95.00th=[ 355], 00:28:16.612 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 477], 99.95th=[ 477], 00:28:16.612 | 99.99th=[ 477] 00:28:16.612 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=192.00, stdev=65.66, samples=20 00:28:16.612 iops : min= 32, max= 64, avg=48.00, stdev=16.42, samples=20 00:28:16.612 lat (msec) : 250=4.03%, 500=95.97% 00:28:16.612 cpu : usr=98.61%, sys=0.96%, ctx=17, majf=0, minf=9 00:28:16.612 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:16.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.612 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:28:16.612 filename2: (groupid=0, jobs=1): err= 0: pid=3161097: Sun May 12 07:08:21 2024 00:28:16.612 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10110msec) 00:28:16.612 slat (nsec): min=7270, max=48216, avg=13672.12, stdev=4765.54 00:28:16.612 clat (msec): min=112, max=520, avg=325.99, stdev=56.66 00:28:16.612 lat (msec): min=112, max=520, avg=326.00, stdev=56.66 00:28:16.612 clat percentiles (msec): 00:28:16.612 | 1.00th=[ 113], 5.00th=[ 305], 10.00th=[ 305], 20.00th=[ 313], 00:28:16.612 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 330], 00:28:16.612 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 368], 95.00th=[ 368], 00:28:16.612 | 99.00th=[ 523], 99.50th=[ 523], 99.90th=[ 523], 99.95th=[ 523], 00:28:16.612 | 99.99th=[ 523] 00:28:16.612 bw ( KiB/s): min= 128, max= 256, per=3.92%, avg=202.11, stdev=64.93, samples=19 00:28:16.612 iops : min= 32, max= 64, avg=50.53, stdev=16.23, samples=19 00:28:16.612 lat (msec) : 250=4.03%, 500=92.74%, 750=3.23% 00:28:16.612 cpu : usr=98.96%, sys=0.63%, ctx=15, majf=0, minf=9 00:28:16.612 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:16.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.612 filename2: (groupid=0, jobs=1): err= 0: pid=3161098: Sun May 12 07:08:21 2024 00:28:16.612 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10100msec) 00:28:16.612 slat (nsec): min=8393, max=51184, avg=24775.27, stdev=8481.24 00:28:16.612 clat (msec): min=256, max=481, avg=325.58, stdev=24.35 00:28:16.612 lat (msec): min=256, max=481, avg=325.61, stdev=24.35 00:28:16.612 clat percentiles (msec): 00:28:16.612 | 1.00th=[ 288], 5.00th=[ 292], 10.00th=[ 296], 20.00th=[ 313], 00:28:16.612 | 30.00th=[ 317], 40.00th=[ 
321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.612 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 342], 95.00th=[ 355], 00:28:16.612 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 481], 99.95th=[ 481], 00:28:16.612 | 99.99th=[ 481] 00:28:16.612 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=192.00, stdev=65.66, samples=20 00:28:16.612 iops : min= 32, max= 64, avg=48.00, stdev=16.42, samples=20 00:28:16.612 lat (msec) : 500=100.00% 00:28:16.612 cpu : usr=98.75%, sys=0.85%, ctx=13, majf=0, minf=9 00:28:16.612 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:16.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.612 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.612 filename2: (groupid=0, jobs=1): err= 0: pid=3161099: Sun May 12 07:08:21 2024 00:28:16.612 read: IOPS=71, BW=287KiB/s (294kB/s)(2880KiB/10029msec) 00:28:16.612 slat (nsec): min=5329, max=73138, avg=12042.81, stdev=5836.18 00:28:16.612 clat (msec): min=120, max=366, avg=222.76, stdev=45.22 00:28:16.612 lat (msec): min=120, max=366, avg=222.77, stdev=45.22 00:28:16.612 clat percentiles (msec): 00:28:16.612 | 1.00th=[ 122], 5.00th=[ 171], 10.00th=[ 176], 20.00th=[ 184], 00:28:16.612 | 30.00th=[ 190], 40.00th=[ 203], 50.00th=[ 218], 60.00th=[ 234], 00:28:16.612 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 292], 00:28:16.612 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:28:16.612 | 99.99th=[ 368] 00:28:16.612 bw ( KiB/s): min= 192, max= 384, per=5.45%, avg=281.60, stdev=56.49, samples=20 00:28:16.612 iops : min= 48, max= 96, avg=70.40, stdev=14.12, samples=20 00:28:16.612 lat (msec) : 250=69.44%, 500=30.56% 00:28:16.612 cpu : usr=98.94%, sys=0.66%, ctx=15, majf=0, minf=9 00:28:16.612 IO depths : 1=1.5%, 2=7.6%, 4=24.6%, 8=55.3%, 16=11.0%, 
32=0.0%, >=64=0.0% 00:28:16.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.613 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.613 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.613 filename2: (groupid=0, jobs=1): err= 0: pid=3161100: Sun May 12 07:08:21 2024 00:28:16.613 read: IOPS=63, BW=255KiB/s (261kB/s)(2584KiB/10126msec) 00:28:16.613 slat (nsec): min=7868, max=47354, avg=13512.61, stdev=7105.63 00:28:16.613 clat (msec): min=160, max=401, avg=250.69, stdev=56.47 00:28:16.613 lat (msec): min=160, max=401, avg=250.70, stdev=56.47 00:28:16.613 clat percentiles (msec): 00:28:16.613 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 184], 20.00th=[ 201], 00:28:16.613 | 30.00th=[ 215], 40.00th=[ 224], 50.00th=[ 236], 60.00th=[ 262], 00:28:16.613 | 70.00th=[ 284], 80.00th=[ 317], 90.00th=[ 326], 95.00th=[ 334], 00:28:16.613 | 99.00th=[ 384], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:28:16.613 | 99.99th=[ 401] 00:28:16.613 bw ( KiB/s): min= 128, max= 368, per=4.89%, avg=252.00, stdev=70.39, samples=20 00:28:16.613 iops : min= 32, max= 92, avg=63.00, stdev=17.60, samples=20 00:28:16.613 lat (msec) : 250=57.59%, 500=42.41% 00:28:16.613 cpu : usr=98.71%, sys=0.89%, ctx=16, majf=0, minf=9 00:28:16.613 IO depths : 1=2.0%, 2=5.7%, 4=17.3%, 8=64.4%, 16=10.5%, 32=0.0%, >=64=0.0% 00:28:16.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.613 complete : 0=0.0%, 4=92.0%, 8=2.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.613 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.613 filename2: (groupid=0, jobs=1): err= 0: pid=3161101: Sun May 12 07:08:21 2024 00:28:16.613 read: IOPS=49, BW=197KiB/s (201kB/s)(1984KiB/10089msec) 00:28:16.613 slat (nsec): min=8327, max=61140, avg=30552.14, 
stdev=9039.19 00:28:16.613 clat (msec): min=200, max=470, avg=325.16, stdev=36.33 00:28:16.613 lat (msec): min=200, max=470, avg=325.19, stdev=36.33 00:28:16.613 clat percentiles (msec): 00:28:16.613 | 1.00th=[ 253], 5.00th=[ 284], 10.00th=[ 296], 20.00th=[ 309], 00:28:16.613 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 326], 00:28:16.613 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 355], 00:28:16.613 | 99.00th=[ 472], 99.50th=[ 472], 99.90th=[ 472], 99.95th=[ 472], 00:28:16.613 | 99.99th=[ 472] 00:28:16.613 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=192.00, stdev=62.72, samples=20 00:28:16.613 iops : min= 32, max= 64, avg=48.00, stdev=15.68, samples=20 00:28:16.613 lat (msec) : 250=0.81%, 500=99.19% 00:28:16.613 cpu : usr=98.72%, sys=0.87%, ctx=21, majf=0, minf=9 00:28:16.613 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:28:16.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.613 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.613 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.613 filename2: (groupid=0, jobs=1): err= 0: pid=3161102: Sun May 12 07:08:21 2024 00:28:16.613 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10110msec) 00:28:16.613 slat (nsec): min=7933, max=30848, avg=13890.87, stdev=4015.07 00:28:16.613 clat (msec): min=112, max=520, avg=325.99, stdev=59.76 00:28:16.613 lat (msec): min=112, max=520, avg=326.00, stdev=59.76 00:28:16.613 clat percentiles (msec): 00:28:16.613 | 1.00th=[ 113], 5.00th=[ 305], 10.00th=[ 305], 20.00th=[ 313], 00:28:16.613 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 330], 00:28:16.613 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 368], 00:28:16.613 | 99.00th=[ 523], 99.50th=[ 523], 99.90th=[ 523], 99.95th=[ 523], 00:28:16.613 | 99.99th=[ 523] 00:28:16.613 bw ( KiB/s): min= 
128, max= 272, per=3.92%, avg=202.11, stdev=65.15, samples=19 00:28:16.613 iops : min= 32, max= 68, avg=50.53, stdev=16.29, samples=19 00:28:16.613 lat (msec) : 250=4.84%, 500=91.94%, 750=3.23% 00:28:16.613 cpu : usr=98.75%, sys=0.83%, ctx=15, majf=0, minf=9 00:28:16.613 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:28:16.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.613 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.613 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:16.613 00:28:16.613 Run status group 0 (all jobs): 00:28:16.613 READ: bw=5158KiB/s (5282kB/s), 196KiB/s-299KiB/s (201kB/s-306kB/s), io=51.1MiB (53.6MB), run=10015-10145msec 00:28:16.613 07:08:21 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:16.613 07:08:21 -- target/dif.sh@43 -- # local sub 00:28:16.613 07:08:21 -- target/dif.sh@45 -- # for sub in "$@" 00:28:16.613 07:08:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:16.613 07:08:21 -- target/dif.sh@36 -- # local sub_id=0 00:28:16.613 07:08:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@45 -- # for sub in "$@" 00:28:16.613 07:08:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:16.613 07:08:21 -- target/dif.sh@36 -- # local sub_id=1 00:28:16.613 07:08:21 -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@45 -- # for sub in "$@" 00:28:16.613 07:08:21 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:16.613 07:08:21 -- target/dif.sh@36 -- # local sub_id=2 00:28:16.613 07:08:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:16.613 07:08:21 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:16.613 07:08:21 -- target/dif.sh@115 -- # numjobs=2 00:28:16.613 07:08:21 -- target/dif.sh@115 -- # iodepth=8 00:28:16.613 07:08:21 -- target/dif.sh@115 -- # runtime=5 00:28:16.613 07:08:21 -- target/dif.sh@115 -- # files=1 00:28:16.613 07:08:21 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:16.613 07:08:21 -- target/dif.sh@28 -- # local sub 00:28:16.613 07:08:21 -- target/dif.sh@30 -- # for sub in "$@" 00:28:16.613 07:08:21 -- target/dif.sh@31 -- # 
create_subsystem 0 00:28:16.613 07:08:21 -- target/dif.sh@18 -- # local sub_id=0 00:28:16.613 07:08:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 bdev_null0 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 [2024-05-12 07:08:21.965709] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@30 -- # for sub in "$@" 00:28:16.613 07:08:21 -- target/dif.sh@31 -- # create_subsystem 1 00:28:16.613 07:08:21 -- target/dif.sh@18 -- # local sub_id=1 00:28:16.613 07:08:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 bdev_null1 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.613 07:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.613 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:16.613 07:08:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.613 07:08:22 -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:16.613 07:08:22 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:16.613 07:08:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.613 07:08:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:16.613 07:08:22 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.613 07:08:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:16.614 07:08:22 -- nvmf/common.sh@520 -- # config=() 00:28:16.614 07:08:22 -- target/dif.sh@82 -- # gen_fio_conf 00:28:16.614 07:08:22 -- common/autotest_common.sh@1318 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:28:16.614 07:08:22 -- nvmf/common.sh@520 -- # local subsystem config 00:28:16.614 07:08:22 -- target/dif.sh@54 -- # local file 00:28:16.614 07:08:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:16.614 07:08:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:16.614 07:08:22 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.614 07:08:22 -- target/dif.sh@56 -- # cat 00:28:16.614 07:08:22 -- common/autotest_common.sh@1320 -- # shift 00:28:16.614 07:08:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:16.614 { 00:28:16.614 "params": { 00:28:16.614 "name": "Nvme$subsystem", 00:28:16.614 "trtype": "$TEST_TRANSPORT", 00:28:16.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.614 "adrfam": "ipv4", 00:28:16.614 "trsvcid": "$NVMF_PORT", 00:28:16.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.614 "hdgst": ${hdgst:-false}, 00:28:16.614 "ddgst": ${ddgst:-false} 00:28:16.614 }, 00:28:16.614 "method": "bdev_nvme_attach_controller" 00:28:16.614 } 00:28:16.614 EOF 00:28:16.614 )") 00:28:16.614 07:08:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:16.614 07:08:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.614 07:08:22 -- nvmf/common.sh@542 -- # cat 00:28:16.614 07:08:22 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.614 07:08:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:16.614 07:08:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:16.614 07:08:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:16.614 07:08:22 -- target/dif.sh@72 -- # (( file <= files )) 00:28:16.614 07:08:22 -- target/dif.sh@73 -- # cat 00:28:16.614 07:08:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:16.614 
07:08:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:16.614 { 00:28:16.614 "params": { 00:28:16.614 "name": "Nvme$subsystem", 00:28:16.614 "trtype": "$TEST_TRANSPORT", 00:28:16.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.614 "adrfam": "ipv4", 00:28:16.614 "trsvcid": "$NVMF_PORT", 00:28:16.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.614 "hdgst": ${hdgst:-false}, 00:28:16.614 "ddgst": ${ddgst:-false} 00:28:16.614 }, 00:28:16.614 "method": "bdev_nvme_attach_controller" 00:28:16.614 } 00:28:16.614 EOF 00:28:16.614 )") 00:28:16.614 07:08:22 -- target/dif.sh@72 -- # (( file++ )) 00:28:16.614 07:08:22 -- target/dif.sh@72 -- # (( file <= files )) 00:28:16.614 07:08:22 -- nvmf/common.sh@542 -- # cat 00:28:16.614 07:08:22 -- nvmf/common.sh@544 -- # jq . 00:28:16.614 07:08:22 -- nvmf/common.sh@545 -- # IFS=, 00:28:16.614 07:08:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:16.614 "params": { 00:28:16.614 "name": "Nvme0", 00:28:16.614 "trtype": "tcp", 00:28:16.614 "traddr": "10.0.0.2", 00:28:16.614 "adrfam": "ipv4", 00:28:16.614 "trsvcid": "4420", 00:28:16.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:16.614 "hdgst": false, 00:28:16.614 "ddgst": false 00:28:16.614 }, 00:28:16.614 "method": "bdev_nvme_attach_controller" 00:28:16.614 },{ 00:28:16.614 "params": { 00:28:16.614 "name": "Nvme1", 00:28:16.614 "trtype": "tcp", 00:28:16.614 "traddr": "10.0.0.2", 00:28:16.614 "adrfam": "ipv4", 00:28:16.614 "trsvcid": "4420", 00:28:16.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:16.614 "hdgst": false, 00:28:16.614 "ddgst": false 00:28:16.614 }, 00:28:16.614 "method": "bdev_nvme_attach_controller" 00:28:16.614 }' 00:28:16.614 07:08:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:16.614 07:08:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:16.614 07:08:22 
-- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.614 07:08:22 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.614 07:08:22 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:16.614 07:08:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:16.614 07:08:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:16.614 07:08:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:16.614 07:08:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:16.614 07:08:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.614 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:16.614 ... 00:28:16.614 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:16.614 ... 00:28:16.614 fio-3.35 00:28:16.614 Starting 4 threads 00:28:16.614 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.614 [2024-05-12 07:08:22.917211] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
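For readers following the trace: the `printf '%s\n' '{ ... }'` output from `gen_nvmf_target_json` above is the per-subsystem JSON config that `fio_bdev` reads from `/dev/fd/62`. The following is a minimal Python sketch of how that config is shaped — an illustration of the structure visible in the log, not the actual SPDK shell helper (the function name and defaults here mirror the trace but are reconstructed, not copied from `nvmf/common.sh`).

```python
import json

def gen_nvmf_target_json(subsystems, traddr="10.0.0.2", trsvcid="4420"):
    # Build one bdev_nvme_attach_controller entry per subsystem id,
    # matching the shape printed by the trace above: Nvme0 attaches to
    # nqn.2016-06.io.spdk:cnode0, Nvme1 to cnode1, and so on.
    entries = []
    for sub in subsystems:
        entries.append({
            "params": {
                "name": f"Nvme{sub}",
                "trtype": "tcp",
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{sub}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{sub}",
                # hdgst/ddgst default to false here, as in this run;
                # the fio_dif_digest test later enables both.
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return json.dumps(entries, indent=2)

print(gen_nvmf_target_json([0, 1]))
```

In the log, this config is joined with `jq` and handed to the fio `spdk_bdev` ioengine alongside the fio job file on `/dev/fd/61`, so each `filenameN` in the fio output maps onto one attached NVMe-oF TCP controller.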
00:28:16.614 [2024-05-12 07:08:22.917266] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:21.885 00:28:21.885 filename0: (groupid=0, jobs=1): err= 0: pid=3162397: Sun May 12 07:08:28 2024 00:28:21.885 read: IOPS=2002, BW=15.6MiB/s (16.4MB/s)(78.3MiB/5002msec) 00:28:21.885 slat (nsec): min=6788, max=50889, avg=13715.24, stdev=7187.99 00:28:21.885 clat (usec): min=1244, max=6710, avg=3952.09, stdev=624.79 00:28:21.885 lat (usec): min=1251, max=6719, avg=3965.80, stdev=624.29 00:28:21.885 clat percentiles (usec): 00:28:21.885 | 1.00th=[ 2737], 5.00th=[ 3195], 10.00th=[ 3359], 20.00th=[ 3556], 00:28:21.885 | 30.00th=[ 3687], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3916], 00:28:21.885 | 70.00th=[ 3982], 80.00th=[ 4228], 90.00th=[ 4817], 95.00th=[ 5407], 00:28:21.885 | 99.00th=[ 5997], 99.50th=[ 6128], 99.90th=[ 6325], 99.95th=[ 6325], 00:28:21.885 | 99.99th=[ 6718] 00:28:21.885 bw ( KiB/s): min=15520, max=16464, per=24.85%, avg=16019.20, stdev=249.68, samples=10 00:28:21.885 iops : min= 1940, max= 2058, avg=2002.40, stdev=31.21, samples=10 00:28:21.885 lat (msec) : 2=0.14%, 4=72.61%, 10=27.25% 00:28:21.885 cpu : usr=95.46%, sys=4.08%, ctx=7, majf=0, minf=30 00:28:21.885 IO depths : 1=0.2%, 2=3.5%, 4=67.7%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.885 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.885 issued rwts: total=10019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.885 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:21.885 filename0: (groupid=0, jobs=1): err= 0: pid=3162398: Sun May 12 07:08:28 2024 00:28:21.885 read: IOPS=2042, BW=16.0MiB/s (16.7MB/s)(79.8MiB/5003msec) 00:28:21.885 slat (nsec): min=3759, max=50821, avg=12141.56, stdev=5270.22 00:28:21.885 clat (usec): min=1415, max=6986, avg=3877.95, stdev=561.85 00:28:21.885 lat (usec): min=1423, max=7000, avg=3890.09, 
stdev=561.67 00:28:21.885 clat percentiles (usec): 00:28:21.885 | 1.00th=[ 2704], 5.00th=[ 3097], 10.00th=[ 3326], 20.00th=[ 3523], 00:28:21.885 | 30.00th=[ 3654], 40.00th=[ 3752], 50.00th=[ 3851], 60.00th=[ 3884], 00:28:21.885 | 70.00th=[ 3949], 80.00th=[ 4113], 90.00th=[ 4490], 95.00th=[ 5080], 00:28:21.885 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6521], 99.95th=[ 6652], 00:28:21.885 | 99.99th=[ 6849] 00:28:21.885 bw ( KiB/s): min=16080, max=16656, per=25.35%, avg=16340.90, stdev=190.47, samples=10 00:28:21.885 iops : min= 2010, max= 2082, avg=2042.60, stdev=23.80, samples=10 00:28:21.885 lat (msec) : 2=0.07%, 4=75.33%, 10=24.61% 00:28:21.885 cpu : usr=95.52%, sys=3.88%, ctx=7, majf=0, minf=56 00:28:21.885 IO depths : 1=0.1%, 2=4.0%, 4=68.5%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.885 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.885 issued rwts: total=10217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.885 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:21.885 filename1: (groupid=0, jobs=1): err= 0: pid=3162399: Sun May 12 07:08:28 2024 00:28:21.885 read: IOPS=2045, BW=16.0MiB/s (16.8MB/s)(79.9MiB/5003msec) 00:28:21.885 slat (nsec): min=3749, max=50631, avg=12448.14, stdev=5274.40 00:28:21.885 clat (usec): min=1169, max=6662, avg=3873.33, stdev=618.71 00:28:21.885 lat (usec): min=1178, max=6672, avg=3885.77, stdev=618.70 00:28:21.885 clat percentiles (usec): 00:28:21.885 | 1.00th=[ 2704], 5.00th=[ 3032], 10.00th=[ 3261], 20.00th=[ 3490], 00:28:21.885 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3818], 60.00th=[ 3884], 00:28:21.885 | 70.00th=[ 3916], 80.00th=[ 4015], 90.00th=[ 4686], 95.00th=[ 5342], 00:28:21.885 | 99.00th=[ 5997], 99.50th=[ 6128], 99.90th=[ 6390], 99.95th=[ 6456], 00:28:21.885 | 99.99th=[ 6652] 00:28:21.885 bw ( KiB/s): min=16080, max=16752, per=25.39%, avg=16368.00, stdev=217.17, samples=10 
00:28:21.885 iops : min= 2010, max= 2094, avg=2046.00, stdev=27.15, samples=10 00:28:21.885 lat (msec) : 2=0.03%, 4=78.94%, 10=21.03% 00:28:21.885 cpu : usr=95.46%, sys=4.02%, ctx=5, majf=0, minf=30 00:28:21.885 IO depths : 1=0.2%, 2=1.6%, 4=69.3%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.885 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.885 issued rwts: total=10233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.885 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:21.885 filename1: (groupid=0, jobs=1): err= 0: pid=3162400: Sun May 12 07:08:28 2024 00:28:21.885 read: IOPS=1967, BW=15.4MiB/s (16.1MB/s)(76.9MiB/5003msec) 00:28:21.885 slat (nsec): min=3741, max=60037, avg=11773.36, stdev=5158.04 00:28:21.885 clat (usec): min=2283, max=46600, avg=4029.46, stdev=1355.09 00:28:21.885 lat (usec): min=2297, max=46612, avg=4041.23, stdev=1354.87 00:28:21.885 clat percentiles (usec): 00:28:21.885 | 1.00th=[ 2999], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3621], 00:28:21.885 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3916], 00:28:21.885 | 70.00th=[ 4015], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5407], 00:28:21.885 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 6980], 99.95th=[46400], 00:28:21.885 | 99.99th=[46400] 00:28:21.885 bw ( KiB/s): min=14621, max=16256, per=24.41%, avg=15735.70, stdev=430.67, samples=10 00:28:21.885 iops : min= 1827, max= 2032, avg=1966.90, stdev=54.01, samples=10 00:28:21.885 lat (msec) : 4=69.67%, 10=30.25%, 50=0.08% 00:28:21.885 cpu : usr=95.28%, sys=4.14%, ctx=10, majf=0, minf=62 00:28:21.885 IO depths : 1=0.2%, 2=3.0%, 4=69.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.885 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.885 issued rwts: total=9841,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:28:21.885 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:21.885 00:28:21.885 Run status group 0 (all jobs): 00:28:21.885 READ: bw=62.9MiB/s (66.0MB/s), 15.4MiB/s-16.0MiB/s (16.1MB/s-16.8MB/s), io=315MiB (330MB), run=5002-5003msec 00:28:21.885 07:08:28 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:21.885 07:08:28 -- target/dif.sh@43 -- # local sub 00:28:21.885 07:08:28 -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.885 07:08:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:21.885 07:08:28 -- target/dif.sh@36 -- # local sub_id=0 00:28:21.885 07:08:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:21.885 07:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.885 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.885 07:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.885 07:08:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:21.885 07:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.885 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.885 07:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.885 07:08:28 -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.885 07:08:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:21.885 07:08:28 -- target/dif.sh@36 -- # local sub_id=1 00:28:21.885 07:08:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.885 07:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.885 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.885 07:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.885 07:08:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:21.885 07:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.885 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.885 07:08:28 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.885 00:28:21.885 real 0m24.217s 00:28:21.885 user 4m36.356s 00:28:21.885 sys 0m4.934s 00:28:21.885 07:08:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.885 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.885 ************************************ 00:28:21.885 END TEST fio_dif_rand_params 00:28:21.885 ************************************ 00:28:21.885 07:08:28 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:21.885 07:08:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:21.885 07:08:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:21.885 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.885 ************************************ 00:28:21.885 START TEST fio_dif_digest 00:28:21.885 ************************************ 00:28:21.885 07:08:28 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:28:21.885 07:08:28 -- target/dif.sh@123 -- # local NULL_DIF 00:28:21.885 07:08:28 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:21.885 07:08:28 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:21.885 07:08:28 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:21.885 07:08:28 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:21.885 07:08:28 -- target/dif.sh@127 -- # numjobs=3 00:28:21.885 07:08:28 -- target/dif.sh@127 -- # iodepth=3 00:28:21.885 07:08:28 -- target/dif.sh@127 -- # runtime=10 00:28:21.885 07:08:28 -- target/dif.sh@128 -- # hdgst=true 00:28:21.885 07:08:28 -- target/dif.sh@128 -- # ddgst=true 00:28:21.885 07:08:28 -- target/dif.sh@130 -- # create_subsystems 0 00:28:21.885 07:08:28 -- target/dif.sh@28 -- # local sub 00:28:21.886 07:08:28 -- target/dif.sh@30 -- # for sub in "$@" 00:28:21.886 07:08:28 -- target/dif.sh@31 -- # create_subsystem 0 00:28:21.886 07:08:28 -- target/dif.sh@18 -- # local sub_id=0 00:28:21.886 07:08:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 
16 --dif-type 3 00:28:21.886 07:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.886 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.886 bdev_null0 00:28:21.886 07:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.886 07:08:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:21.886 07:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.886 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.886 07:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.886 07:08:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:21.886 07:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.886 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.886 07:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.886 07:08:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:21.886 07:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.886 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:28:21.886 [2024-05-12 07:08:28.412664] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.886 07:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.886 07:08:28 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:21.886 07:08:28 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:21.886 07:08:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:21.886 07:08:28 -- nvmf/common.sh@520 -- # config=() 00:28:21.886 07:08:28 -- nvmf/common.sh@520 -- # local subsystem config 00:28:21.886 07:08:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:21.886 07:08:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.886 07:08:28 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:21.886 { 00:28:21.886 "params": { 00:28:21.886 "name": "Nvme$subsystem", 00:28:21.886 "trtype": "$TEST_TRANSPORT", 00:28:21.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.886 "adrfam": "ipv4", 00:28:21.886 "trsvcid": "$NVMF_PORT", 00:28:21.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.886 "hdgst": ${hdgst:-false}, 00:28:21.886 "ddgst": ${ddgst:-false} 00:28:21.886 }, 00:28:21.886 "method": "bdev_nvme_attach_controller" 00:28:21.886 } 00:28:21.886 EOF 00:28:21.886 )") 00:28:21.886 07:08:28 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.886 07:08:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:21.886 07:08:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:21.886 07:08:28 -- target/dif.sh@82 -- # gen_fio_conf 00:28:21.886 07:08:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:21.886 07:08:28 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.886 07:08:28 -- target/dif.sh@54 -- # local file 00:28:21.886 07:08:28 -- common/autotest_common.sh@1320 -- # shift 00:28:21.886 07:08:28 -- nvmf/common.sh@542 -- # cat 00:28:21.886 07:08:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:21.886 07:08:28 -- target/dif.sh@56 -- # cat 00:28:21.886 07:08:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.886 07:08:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.886 07:08:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:21.886 07:08:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:21.886 07:08:28 -- 
target/dif.sh@72 -- # (( file = 1 )) 00:28:21.886 07:08:28 -- target/dif.sh@72 -- # (( file <= files )) 00:28:21.886 07:08:28 -- nvmf/common.sh@544 -- # jq . 00:28:21.886 07:08:28 -- nvmf/common.sh@545 -- # IFS=, 00:28:21.886 07:08:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:21.886 "params": { 00:28:21.886 "name": "Nvme0", 00:28:21.886 "trtype": "tcp", 00:28:21.886 "traddr": "10.0.0.2", 00:28:21.886 "adrfam": "ipv4", 00:28:21.886 "trsvcid": "4420", 00:28:21.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:21.886 "hdgst": true, 00:28:21.886 "ddgst": true 00:28:21.886 }, 00:28:21.886 "method": "bdev_nvme_attach_controller" 00:28:21.886 }' 00:28:21.886 07:08:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:21.886 07:08:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:21.886 07:08:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.886 07:08:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.886 07:08:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:21.886 07:08:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:21.886 07:08:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:21.886 07:08:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:21.886 07:08:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:21.886 07:08:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.886 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:21.886 ... 
00:28:21.886 fio-3.35 00:28:21.886 Starting 3 threads 00:28:21.886 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.144 [2024-05-12 07:08:29.048159] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:28:22.144 [2024-05-12 07:08:29.048232] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:32.105 00:28:32.105 filename0: (groupid=0, jobs=1): err= 0: pid=3163173: Sun May 12 07:08:39 2024 00:28:32.105 read: IOPS=164, BW=20.5MiB/s (21.5MB/s)(206MiB/10048msec) 00:28:32.105 slat (nsec): min=4751, max=45331, avg=18983.43, stdev=4829.31 00:28:32.105 clat (usec): min=9120, max=98429, avg=18208.78, stdev=11158.05 00:28:32.105 lat (usec): min=9138, max=98449, avg=18227.76, stdev=11158.29 00:28:32.105 clat percentiles (usec): 00:28:32.105 | 1.00th=[ 9765], 5.00th=[11469], 10.00th=[13304], 20.00th=[14091], 00:28:32.105 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15401], 60.00th=[15795], 00:28:32.105 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17957], 95.00th=[55313], 00:28:32.105 | 99.00th=[57410], 99.50th=[58459], 99.90th=[98042], 99.95th=[98042], 00:28:32.105 | 99.99th=[98042] 00:28:32.105 bw ( KiB/s): min=17920, max=25088, per=27.36%, avg=21094.40, stdev=2142.49, samples=20 00:28:32.105 iops : min= 140, max= 196, avg=164.80, stdev=16.74, samples=20 00:28:32.105 lat (msec) : 10=1.33%, 20=91.04%, 50=0.24%, 100=7.39% 00:28:32.105 cpu : usr=94.89%, sys=4.67%, ctx=17, majf=0, minf=110 00:28:32.105 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:32.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.106 issued rwts: total=1651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.106 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:32.106 filename0: (groupid=0, jobs=1): err= 0: pid=3163174: Sun May 12 07:08:39 
2024 00:28:32.106 read: IOPS=192, BW=24.1MiB/s (25.3MB/s)(242MiB/10046msec) 00:28:32.106 slat (nsec): min=4845, max=43411, avg=16853.83, stdev=3842.04 00:28:32.106 clat (usec): min=6735, max=59736, avg=15515.43, stdev=5272.16 00:28:32.106 lat (usec): min=6749, max=59756, avg=15532.28, stdev=5272.44 00:28:32.106 clat percentiles (usec): 00:28:32.106 | 1.00th=[ 8586], 5.00th=[11600], 10.00th=[12387], 20.00th=[13435], 00:28:32.106 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15270], 60.00th=[15664], 00:28:32.106 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17433], 95.00th=[18220], 00:28:32.106 | 99.00th=[56361], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:28:32.106 | 99.99th=[59507] 00:28:32.106 bw ( KiB/s): min=20480, max=27136, per=32.11%, avg=24757.70, stdev=1583.27, samples=20 00:28:32.106 iops : min= 160, max= 212, avg=193.40, stdev=12.36, samples=20 00:28:32.106 lat (msec) : 10=2.79%, 20=95.66%, 50=0.21%, 100=1.34% 00:28:32.106 cpu : usr=94.72%, sys=4.53%, ctx=30, majf=0, minf=143 00:28:32.106 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:32.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.106 issued rwts: total=1937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.106 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:32.106 filename0: (groupid=0, jobs=1): err= 0: pid=3163175: Sun May 12 07:08:39 2024 00:28:32.106 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(308MiB/10047msec) 00:28:32.106 slat (nsec): min=4751, max=97476, avg=16659.27, stdev=4382.55 00:28:32.106 clat (usec): min=6633, max=55291, avg=12189.42, stdev=3817.17 00:28:32.106 lat (usec): min=6647, max=55301, avg=12206.07, stdev=3817.53 00:28:32.106 clat percentiles (usec): 00:28:32.106 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10028], 00:28:32.106 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 
00:28:32.106 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13829], 95.00th=[14222], 00:28:32.106 | 99.00th=[15664], 99.50th=[52167], 99.90th=[53740], 99.95th=[54789], 00:28:32.106 | 99.99th=[55313] 00:28:32.106 bw ( KiB/s): min=26880, max=34816, per=40.87%, avg=31513.60, stdev=2237.78, samples=20 00:28:32.106 iops : min= 210, max= 272, avg=246.20, stdev=17.48, samples=20 00:28:32.106 lat (msec) : 10=19.39%, 20=79.80%, 50=0.16%, 100=0.65% 00:28:32.106 cpu : usr=94.29%, sys=5.15%, ctx=44, majf=0, minf=198 00:28:32.106 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:32.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.106 issued rwts: total=2465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.106 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:32.106 00:28:32.106 Run status group 0 (all jobs): 00:28:32.106 READ: bw=75.3MiB/s (79.0MB/s), 20.5MiB/s-30.7MiB/s (21.5MB/s-32.2MB/s), io=757MiB (793MB), run=10046-10048msec 00:28:32.364 07:08:39 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:32.364 07:08:39 -- target/dif.sh@43 -- # local sub 00:28:32.364 07:08:39 -- target/dif.sh@45 -- # for sub in "$@" 00:28:32.364 07:08:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:32.364 07:08:39 -- target/dif.sh@36 -- # local sub_id=0 00:28:32.364 07:08:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:32.364 07:08:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.364 07:08:39 -- common/autotest_common.sh@10 -- # set +x 00:28:32.364 07:08:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.364 07:08:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:32.364 07:08:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.364 07:08:39 -- common/autotest_common.sh@10 -- # set +x 00:28:32.364 07:08:39 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.364 00:28:32.364 real 0m11.081s 00:28:32.364 user 0m29.532s 00:28:32.364 sys 0m1.735s 00:28:32.364 07:08:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.364 07:08:39 -- common/autotest_common.sh@10 -- # set +x 00:28:32.364 ************************************ 00:28:32.364 END TEST fio_dif_digest 00:28:32.364 ************************************ 00:28:32.364 07:08:39 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:32.364 07:08:39 -- target/dif.sh@147 -- # nvmftestfini 00:28:32.364 07:08:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:32.364 07:08:39 -- nvmf/common.sh@116 -- # sync 00:28:32.364 07:08:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:32.364 07:08:39 -- nvmf/common.sh@119 -- # set +e 00:28:32.364 07:08:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:32.364 07:08:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:32.364 rmmod nvme_tcp 00:28:32.630 rmmod nvme_fabrics 00:28:32.630 rmmod nvme_keyring 00:28:32.630 07:08:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:32.630 07:08:39 -- nvmf/common.sh@123 -- # set -e 00:28:32.630 07:08:39 -- nvmf/common.sh@124 -- # return 0 00:28:32.630 07:08:39 -- nvmf/common.sh@477 -- # '[' -n 3157039 ']' 00:28:32.630 07:08:39 -- nvmf/common.sh@478 -- # killprocess 3157039 00:28:32.630 07:08:39 -- common/autotest_common.sh@926 -- # '[' -z 3157039 ']' 00:28:32.630 07:08:39 -- common/autotest_common.sh@930 -- # kill -0 3157039 00:28:32.630 07:08:39 -- common/autotest_common.sh@931 -- # uname 00:28:32.630 07:08:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:32.630 07:08:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3157039 00:28:32.630 07:08:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:32.630 07:08:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:32.630 07:08:39 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 3157039' 00:28:32.630 killing process with pid 3157039 00:28:32.630 07:08:39 -- common/autotest_common.sh@945 -- # kill 3157039 00:28:32.630 07:08:39 -- common/autotest_common.sh@950 -- # wait 3157039 00:28:32.899 07:08:39 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:28:32.899 07:08:39 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:33.833 Waiting for block devices as requested 00:28:33.833 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:34.092 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:34.092 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:34.092 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:34.092 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:34.350 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:34.350 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:34.350 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:34.350 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:34.350 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:34.608 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:34.608 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:34.608 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:34.866 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:34.866 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:34.866 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:34.866 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:35.124 07:08:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:35.124 07:08:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:35.124 07:08:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:35.124 07:08:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:35.124 07:08:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.124 07:08:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:35.124 07:08:42 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:28:37.021 07:08:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:37.021 00:28:37.021 real 1m6.601s 00:28:37.021 user 6m31.230s 00:28:37.021 sys 0m16.281s 00:28:37.021 07:08:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.021 07:08:44 -- common/autotest_common.sh@10 -- # set +x 00:28:37.021 ************************************ 00:28:37.021 END TEST nvmf_dif 00:28:37.021 ************************************ 00:28:37.021 07:08:44 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:37.021 07:08:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:37.021 07:08:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:37.021 07:08:44 -- common/autotest_common.sh@10 -- # set +x 00:28:37.021 ************************************ 00:28:37.021 START TEST nvmf_abort_qd_sizes 00:28:37.021 ************************************ 00:28:37.021 07:08:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:37.279 * Looking for test storage... 
00:28:37.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.279 07:08:44 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.279 07:08:44 -- nvmf/common.sh@7 -- # uname -s 00:28:37.279 07:08:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.279 07:08:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.279 07:08:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.279 07:08:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.279 07:08:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.279 07:08:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.279 07:08:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.279 07:08:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.279 07:08:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.279 07:08:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.279 07:08:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.279 07:08:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.279 07:08:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.279 07:08:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.279 07:08:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.279 07:08:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.279 07:08:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.279 07:08:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.279 07:08:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.279 07:08:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.279 07:08:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.279 07:08:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.279 07:08:44 -- paths/export.sh@5 -- # export PATH 00:28:37.279 07:08:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.279 07:08:44 -- nvmf/common.sh@46 -- # : 0 00:28:37.279 07:08:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:37.279 07:08:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:37.279 
07:08:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:37.279 07:08:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.279 07:08:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.279 07:08:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:37.279 07:08:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:37.279 07:08:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:37.279 07:08:44 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:28:37.279 07:08:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:37.279 07:08:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.279 07:08:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:37.279 07:08:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:37.279 07:08:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:37.279 07:08:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.279 07:08:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:37.279 07:08:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.279 07:08:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:37.279 07:08:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:37.279 07:08:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:37.279 07:08:44 -- common/autotest_common.sh@10 -- # set +x 00:28:39.180 07:08:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:39.180 07:08:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:39.180 07:08:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:39.180 07:08:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:39.180 07:08:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:39.180 07:08:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:39.180 07:08:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:39.180 07:08:46 -- nvmf/common.sh@294 -- # net_devs=() 00:28:39.181 07:08:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:39.181 
07:08:46 -- nvmf/common.sh@295 -- # e810=() 00:28:39.181 07:08:46 -- nvmf/common.sh@295 -- # local -ga e810 00:28:39.181 07:08:46 -- nvmf/common.sh@296 -- # x722=() 00:28:39.181 07:08:46 -- nvmf/common.sh@296 -- # local -ga x722 00:28:39.181 07:08:46 -- nvmf/common.sh@297 -- # mlx=() 00:28:39.181 07:08:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:39.181 07:08:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.181 07:08:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:39.181 07:08:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:39.181 07:08:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:39.181 07:08:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:39.181 07:08:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:39.181 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:28:39.181 07:08:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:39.181 07:08:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:39.181 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:39.181 07:08:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:39.181 07:08:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:39.181 07:08:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.181 07:08:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:39.181 07:08:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.181 07:08:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:39.181 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:39.181 07:08:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.181 07:08:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:39.181 07:08:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.181 07:08:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:39.181 07:08:46 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.181 07:08:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:39.181 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:39.181 07:08:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.181 07:08:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:39.181 07:08:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:39.181 07:08:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:39.181 07:08:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:39.181 07:08:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.181 07:08:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.181 07:08:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.181 07:08:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:39.181 07:08:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.181 07:08:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.181 07:08:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:39.181 07:08:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.181 07:08:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.181 07:08:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:39.181 07:08:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:39.181 07:08:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.181 07:08:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.439 07:08:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.439 07:08:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.439 07:08:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:39.439 07:08:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:28:39.439 07:08:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.439 07:08:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.439 07:08:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:39.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:28:39.439 00:28:39.439 --- 10.0.0.2 ping statistics --- 00:28:39.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.439 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:28:39.439 07:08:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:28:39.439 00:28:39.439 --- 10.0.0.1 ping statistics --- 00:28:39.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.439 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:28:39.439 07:08:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.439 07:08:46 -- nvmf/common.sh@410 -- # return 0 00:28:39.439 07:08:46 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:28:39.439 07:08:46 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:40.816 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:40.816 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:40.816 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:40.816 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:40.816 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:40.816 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:40.816 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:40.816 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:40.816 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:40.816 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:40.816 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:40.816 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:40.816 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:40.816 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:40.816 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:40.816 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:41.750 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:28:41.750 07:08:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.750 07:08:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:41.750 07:08:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:41.750 07:08:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.750 07:08:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:41.751 07:08:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:41.751 07:08:48 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:28:41.751 07:08:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:41.751 07:08:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:41.751 07:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:41.751 07:08:48 -- nvmf/common.sh@469 -- # nvmfpid=3168193 00:28:41.751 07:08:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:41.751 07:08:48 -- nvmf/common.sh@470 -- # waitforlisten 3168193 00:28:41.751 07:08:48 -- common/autotest_common.sh@819 -- # '[' -z 3168193 ']' 00:28:41.751 07:08:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.751 07:08:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:41.751 07:08:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:41.751 07:08:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:41.751 07:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:41.751 [2024-05-12 07:08:48.782049] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:41.751 [2024-05-12 07:08:48.782133] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.751 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.751 [2024-05-12 07:08:48.851890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.009 [2024-05-12 07:08:48.968556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:42.009 [2024-05-12 07:08:48.968729] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.009 [2024-05-12 07:08:48.968750] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.009 [2024-05-12 07:08:48.968765] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:42.009 [2024-05-12 07:08:48.968826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.009 [2024-05-12 07:08:48.968896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.009 [2024-05-12 07:08:48.969004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.009 [2024-05-12 07:08:48.969007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.941 07:08:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:42.941 07:08:49 -- common/autotest_common.sh@852 -- # return 0 00:28:42.941 07:08:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:42.941 07:08:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:42.941 07:08:49 -- common/autotest_common.sh@10 -- # set +x 00:28:42.941 07:08:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.941 07:08:49 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:42.941 07:08:49 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:28:42.941 07:08:49 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:28:42.941 07:08:49 -- scripts/common.sh@311 -- # local bdf bdfs 00:28:42.941 07:08:49 -- scripts/common.sh@312 -- # local nvmes 00:28:42.941 07:08:49 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:28:42.941 07:08:49 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:42.941 07:08:49 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:28:42.941 07:08:49 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:28:42.941 07:08:49 -- scripts/common.sh@322 -- # uname -s 00:28:42.941 07:08:49 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:28:42.941 07:08:49 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:28:42.941 07:08:49 -- scripts/common.sh@327 -- # (( 1 )) 00:28:42.941 07:08:49 -- 
scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:28:42.941 07:08:49 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:28:42.941 07:08:49 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:28:42.941 07:08:49 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:28:42.941 07:08:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:42.941 07:08:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:42.941 07:08:49 -- common/autotest_common.sh@10 -- # set +x 00:28:42.941 ************************************ 00:28:42.941 START TEST spdk_target_abort 00:28:42.941 ************************************ 00:28:42.941 07:08:49 -- common/autotest_common.sh@1104 -- # spdk_target 00:28:42.941 07:08:49 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:42.941 07:08:49 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:28:42.941 07:08:49 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:28:42.941 07:08:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.941 07:08:49 -- common/autotest_common.sh@10 -- # set +x 00:28:45.463 spdk_targetn1 00:28:45.463 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.463 07:08:52 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.463 07:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.464 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:45.464 [2024-05-12 07:08:52.580328] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.464 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.464 07:08:52 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:28:45.464 07:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.464 07:08:52 -- common/autotest_common.sh@10 -- # 
set +x 00:28:45.721 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:28:45.721 07:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.721 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:45.721 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:28:45.721 07:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.721 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:45.721 [2024-05-12 07:08:52.612587] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.721 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.721 07:08:52 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:45.721 07:08:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:28:45.721 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.041 Initializing NVMe Controllers 00:28:49.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:28:49.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:28:49.042 Initialization complete. Launching workers. 
00:28:49.042 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8564, failed: 0 00:28:49.042 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1374, failed to submit 7190 00:28:49.042 success 785, unsuccess 589, failed 0 00:28:49.042 07:08:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:49.042 07:08:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:28:49.042 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.337 Initializing NVMe Controllers 00:28:52.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:28:52.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:28:52.337 Initialization complete. Launching workers. 00:28:52.337 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8607, failed: 0 00:28:52.337 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1245, failed to submit 7362 00:28:52.337 success 306, unsuccess 939, failed 0 00:28:52.337 07:08:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:52.337 07:08:59 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:28:52.337 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.620 Initializing NVMe Controllers 00:28:55.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:28:55.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:28:55.620 Initialization complete. Launching workers. 
00:28:55.620 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 29911, failed: 0 00:28:55.620 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2639, failed to submit 27272 00:28:55.620 success 514, unsuccess 2125, failed 0 00:28:55.620 07:09:02 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:28:55.620 07:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.620 07:09:02 -- common/autotest_common.sh@10 -- # set +x 00:28:55.620 07:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.620 07:09:02 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:55.620 07:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.620 07:09:02 -- common/autotest_common.sh@10 -- # set +x 00:28:56.992 07:09:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.992 07:09:03 -- target/abort_qd_sizes.sh@62 -- # killprocess 3168193 00:28:56.992 07:09:03 -- common/autotest_common.sh@926 -- # '[' -z 3168193 ']' 00:28:56.992 07:09:03 -- common/autotest_common.sh@930 -- # kill -0 3168193 00:28:56.992 07:09:03 -- common/autotest_common.sh@931 -- # uname 00:28:56.992 07:09:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:56.992 07:09:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3168193 00:28:56.992 07:09:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:56.992 07:09:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:56.992 07:09:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3168193' 00:28:56.992 killing process with pid 3168193 00:28:56.992 07:09:03 -- common/autotest_common.sh@945 -- # kill 3168193 00:28:56.992 07:09:03 -- common/autotest_common.sh@950 -- # wait 3168193 00:28:56.992 00:28:56.992 real 0m14.349s 00:28:56.992 user 0m56.430s 00:28:56.992 sys 0m2.725s 00:28:56.992 07:09:04 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:28:56.992 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:28:56.992 ************************************ 00:28:56.992 END TEST spdk_target_abort 00:28:56.992 ************************************ 00:28:56.992 07:09:04 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:28:56.992 07:09:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:56.992 07:09:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:56.992 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:28:56.992 ************************************ 00:28:56.992 START TEST kernel_target_abort 00:28:56.992 ************************************ 00:28:56.992 07:09:04 -- common/autotest_common.sh@1104 -- # kernel_target 00:28:56.992 07:09:04 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:28:56.992 07:09:04 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:28:56.992 07:09:04 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:28:56.992 07:09:04 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:28:56.992 07:09:04 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:28:56.992 07:09:04 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:28:56.992 07:09:04 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:56.992 07:09:04 -- nvmf/common.sh@627 -- # local block nvme 00:28:56.992 07:09:04 -- nvmf/common.sh@629 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:56.992 07:09:04 -- nvmf/common.sh@630 -- # modprobe nvmet 00:28:57.250 07:09:04 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:57.250 07:09:04 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:58.186 Waiting for block devices as requested 00:28:58.186 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:58.186 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:58.444 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:58.444 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:58.444 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:58.444 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:58.704 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:58.704 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:58.704 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:58.704 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:58.962 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:58.962 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:58.962 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:58.962 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:59.220 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:59.220 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:59.220 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:59.478 07:09:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:28:59.478 07:09:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:59.478 07:09:06 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:28:59.478 07:09:06 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:28:59.478 07:09:06 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:59.478 No valid GPT data, bailing 00:28:59.478 07:09:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:59.478 07:09:06 -- scripts/common.sh@393 -- # pt= 00:28:59.478 07:09:06 -- 
scripts/common.sh@394 -- # return 1 00:28:59.478 07:09:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:28:59.478 07:09:06 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:28:59.478 07:09:06 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:28:59.478 07:09:06 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:28:59.478 07:09:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:59.478 07:09:06 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:28:59.478 07:09:06 -- nvmf/common.sh@654 -- # echo 1 00:28:59.478 07:09:06 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:28:59.478 07:09:06 -- nvmf/common.sh@656 -- # echo 1 00:28:59.478 07:09:06 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:28:59.478 07:09:06 -- nvmf/common.sh@663 -- # echo tcp 00:28:59.478 07:09:06 -- nvmf/common.sh@664 -- # echo 4420 00:28:59.478 07:09:06 -- nvmf/common.sh@665 -- # echo ipv4 00:28:59.478 07:09:06 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:59.478 07:09:06 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:28:59.478 00:28:59.478 Discovery Log Number of Records 2, Generation counter 2 00:28:59.478 =====Discovery Log Entry 0====== 00:28:59.478 trtype: tcp 00:28:59.478 adrfam: ipv4 00:28:59.478 subtype: current discovery subsystem 00:28:59.478 treq: not specified, sq flow control disable supported 00:28:59.478 portid: 1 00:28:59.478 trsvcid: 4420 00:28:59.478 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:59.478 traddr: 10.0.0.1 00:28:59.478 eflags: none 00:28:59.478 sectype: none 00:28:59.478 =====Discovery Log Entry 1====== 00:28:59.478 trtype: tcp 00:28:59.478 adrfam: ipv4 00:28:59.478 subtype: nvme subsystem 00:28:59.478 treq: not specified, sq 
flow control disable supported 00:28:59.478 portid: 1 00:28:59.478 trsvcid: 4420 00:28:59.478 subnqn: kernel_target 00:28:59.478 traddr: 10.0.0.1 00:28:59.478 eflags: none 00:28:59.478 sectype: none 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.478 07:09:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:59.479 07:09:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.479 07:09:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:28:59.479 07:09:06 -- 
target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:59.479 07:09:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:28:59.479 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.757 Initializing NVMe Controllers 00:29:02.757 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:02.757 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:02.757 Initialization complete. Launching workers. 00:29:02.757 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 27809, failed: 0 00:29:02.757 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27809, failed to submit 0 00:29:02.757 success 0, unsuccess 27809, failed 0 00:29:02.757 07:09:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:02.757 07:09:09 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:02.757 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.035 Initializing NVMe Controllers 00:29:06.035 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:06.035 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:06.035 Initialization complete. Launching workers. 
00:29:06.035 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 60190, failed: 0 00:29:06.035 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 15166, failed to submit 45024 00:29:06.035 success 0, unsuccess 15166, failed 0 00:29:06.035 07:09:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:06.035 07:09:12 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:06.035 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.574 Initializing NVMe Controllers 00:29:08.574 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:08.574 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:08.574 Initialization complete. Launching workers. 00:29:08.574 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 59363, failed: 0 00:29:08.574 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 14774, failed to submit 44589 00:29:08.574 success 0, unsuccess 14774, failed 0 00:29:08.574 07:09:15 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:29:08.574 07:09:15 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:29:08.574 07:09:15 -- nvmf/common.sh@677 -- # echo 0 00:29:08.574 07:09:15 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:29:08.574 07:09:15 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:08.574 07:09:15 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:08.574 07:09:15 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:08.574 07:09:15 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:29:08.574 07:09:15 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:29:08.574 
00:29:08.574 real 0m11.567s 00:29:08.574 user 0m3.922s 00:29:08.574 sys 0m2.510s 00:29:08.574 07:09:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:08.574 07:09:15 -- common/autotest_common.sh@10 -- # set +x 00:29:08.574 ************************************ 00:29:08.574 END TEST kernel_target_abort 00:29:08.574 ************************************ 00:29:08.833 07:09:15 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:29:08.833 07:09:15 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:29:08.833 07:09:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:08.833 07:09:15 -- nvmf/common.sh@116 -- # sync 00:29:08.833 07:09:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:08.833 07:09:15 -- nvmf/common.sh@119 -- # set +e 00:29:08.833 07:09:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:08.833 07:09:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:08.833 rmmod nvme_tcp 00:29:08.833 rmmod nvme_fabrics 00:29:08.833 rmmod nvme_keyring 00:29:08.833 07:09:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:08.833 07:09:15 -- nvmf/common.sh@123 -- # set -e 00:29:08.833 07:09:15 -- nvmf/common.sh@124 -- # return 0 00:29:08.833 07:09:15 -- nvmf/common.sh@477 -- # '[' -n 3168193 ']' 00:29:08.833 07:09:15 -- nvmf/common.sh@478 -- # killprocess 3168193 00:29:08.833 07:09:15 -- common/autotest_common.sh@926 -- # '[' -z 3168193 ']' 00:29:08.833 07:09:15 -- common/autotest_common.sh@930 -- # kill -0 3168193 00:29:08.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3168193) - No such process 00:29:08.833 07:09:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3168193 is not found' 00:29:08.833 Process with pid 3168193 is not found 00:29:08.833 07:09:15 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:29:08.833 07:09:15 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:10.209 0000:88:00.0 (8086 0a54): 
Already using the nvme driver 00:29:10.209 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:10.209 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:10.209 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:10.209 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:10.209 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:10.209 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:10.209 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:10.209 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:10.209 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:10.209 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:10.209 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:10.209 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:10.209 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:10.209 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:10.209 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:10.209 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:10.209 07:09:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:10.210 07:09:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:10.210 07:09:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.210 07:09:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:10.210 07:09:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.210 07:09:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:10.210 07:09:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.112 07:09:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:12.113 00:29:12.113 real 0m35.124s 00:29:12.113 user 1m2.605s 00:29:12.113 sys 0m8.740s 00:29:12.113 07:09:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.113 07:09:19 
-- common/autotest_common.sh@10 -- # set +x 00:29:12.113 ************************************ 00:29:12.113 END TEST nvmf_abort_qd_sizes 00:29:12.113 ************************************ 00:29:12.372 07:09:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:12.372 07:09:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:12.372 07:09:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:12.372 07:09:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:12.372 07:09:19 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:12.372 07:09:19 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:12.372 07:09:19 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:12.372 07:09:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:12.372 07:09:19 -- common/autotest_common.sh@10 -- # set +x 00:29:12.372 07:09:19 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:12.372 07:09:19 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:12.372 07:09:19 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:12.372 07:09:19 -- common/autotest_common.sh@10 -- # set +x 00:29:14.273 INFO: APP EXITING 00:29:14.273 INFO: killing all VMs 00:29:14.273 INFO: killing vhost app 00:29:14.273 INFO: EXIT DONE 00:29:15.209 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:15.209 0000:00:04.7 
(8086 0e27): Already using the ioatdma driver 00:29:15.209 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:15.209 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:15.209 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:15.209 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:15.209 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:15.209 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:15.209 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:15.209 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:15.209 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:15.209 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:15.209 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:15.209 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:15.209 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:15.209 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:15.209 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:16.583 Cleaning 00:29:16.583 Removing: /var/run/dpdk/spdk0/config 00:29:16.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:16.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:16.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:16.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:16.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:16.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:16.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:16.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:16.583 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:16.583 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:16.583 Removing: /var/run/dpdk/spdk1/config 00:29:16.583 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:16.583 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:16.583 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:16.583 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:16.583 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:16.583 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:16.583 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:16.583 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:16.583 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:16.583 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:16.583 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:16.583 Removing: /var/run/dpdk/spdk2/config 00:29:16.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:16.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:16.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:16.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:16.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:16.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:16.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:16.583 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:16.583 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:16.583 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:16.583 Removing: /var/run/dpdk/spdk3/config 00:29:16.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:16.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:16.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:16.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:16.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:16.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:16.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:16.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:16.583 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:16.583 
Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:16.583 Removing: /var/run/dpdk/spdk4/config 00:29:16.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:16.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:16.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:16.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:16.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:16.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:16.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:16.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:16.583 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:16.583 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:16.583 Removing: /dev/shm/bdev_svc_trace.1 00:29:16.583 Removing: /dev/shm/nvmf_trace.0 00:29:16.583 Removing: /dev/shm/spdk_tgt_trace.pid2905992 00:29:16.583 Removing: /var/run/dpdk/spdk0 00:29:16.583 Removing: /var/run/dpdk/spdk1 00:29:16.583 Removing: /var/run/dpdk/spdk2 00:29:16.583 Removing: /var/run/dpdk/spdk3 00:29:16.583 Removing: /var/run/dpdk/spdk4 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2904300 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2905049 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2905992 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2906474 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2907698 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2908643 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2908946 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2909150 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2909487 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2909679 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2909842 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2910117 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2910306 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2910633 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2913172 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2913350 00:29:16.583 Removing: 
/var/run/dpdk/spdk_pid2913646 00:29:16.583 Removing: /var/run/dpdk/spdk_pid2913786 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2914103 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2914243 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2914749 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2914933 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2915102 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2915244 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2915410 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2915553 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2916431 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2916585 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2916872 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2917084 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2917111 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2917293 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2917437 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2917598 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2917861 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2918022 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2918160 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2918446 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2918588 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2918749 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2919006 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2919177 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2919315 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2919564 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2919741 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2919904 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2920061 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2920327 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2920473 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2920644 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2920891 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2921062 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2921200 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2921474 
00:29:16.584 Removing: /var/run/dpdk/spdk_pid2921628 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2921783 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2922047 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2922210 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2922356 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2922632 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2922778 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2922940 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2923140 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2923359 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2923511 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2923704 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2923940 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2924103 00:29:16.584 Removing: /var/run/dpdk/spdk_pid2924263 00:29:16.842 Removing: /var/run/dpdk/spdk_pid2924526 00:29:16.842 Removing: /var/run/dpdk/spdk_pid2924676 00:29:16.842 Removing: /var/run/dpdk/spdk_pid2924956 00:29:16.842 Removing: /var/run/dpdk/spdk_pid2925024 00:29:16.842 Removing: /var/run/dpdk/spdk_pid2925227 00:29:16.843 Removing: /var/run/dpdk/spdk_pid2927425 00:29:16.843 Removing: /var/run/dpdk/spdk_pid2983028 00:29:16.843 Removing: /var/run/dpdk/spdk_pid2985685 00:29:16.843 Removing: /var/run/dpdk/spdk_pid2991680 00:29:16.843 Removing: /var/run/dpdk/spdk_pid2995031 00:29:16.843 Removing: /var/run/dpdk/spdk_pid2997546 00:29:16.843 Removing: /var/run/dpdk/spdk_pid2997956 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3003037 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3003315 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3005997 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3009875 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3012618 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3019251 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3024659 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3026011 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3026707 00:29:16.843 Removing: /var/run/dpdk/spdk_pid3037193 00:29:16.843 Removing: 
/var/run/dpdk/spdk_pid3039444
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3042284
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3043492
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3044868
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3045180
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3045403
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3045588
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3046527
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3048148
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3049045
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3049492
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3053117
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3056573
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3060227
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3084277
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3086935
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3090772
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3091868
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3092992
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3095571
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3097979
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3102351
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3102384
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3105297
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3105433
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3105590
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3105975
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3105980
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3107088
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3108424
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3110145
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3111354
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3112694
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3113913
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3117535
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3117995
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3119057
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3119661
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3123309
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3125355
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3128958
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3132599
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3136271
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3136702
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3137233
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3137655
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3138358
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3138946
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3139849
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3140401
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3143066
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3143335
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3147191
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3147372
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3149021
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3154175
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3154191
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3157185
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3158590
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3160007
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3160893
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3162212
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3163112
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3168632
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3169036
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3169440
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3171025
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3171940
00:29:16.843 Removing: /var/run/dpdk/spdk_pid3172349
00:29:16.843 Clean
00:29:16.843 killing process with pid 2876876
00:29:24.956 killing process with pid 2876873
00:29:24.956 killing process with pid 2876875
00:29:24.956 killing process with pid 2876874
00:29:24.956 07:09:31 -- common/autotest_common.sh@1436 -- # return 0
00:29:24.956 07:09:31 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:29:24.956 07:09:31 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:24.956 07:09:31 -- common/autotest_common.sh@10 -- # set +x
00:29:24.956 07:09:31 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:29:24.956 07:09:31 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:24.956 07:09:31 -- common/autotest_common.sh@10 -- # set +x
00:29:24.956 07:09:31 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:29:24.956 07:09:31 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:29:24.956 07:09:31 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:29:24.956 07:09:31 -- spdk/autotest.sh@394 -- # hash lcov
00:29:24.956 07:09:31 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:24.956 07:09:31 -- spdk/autotest.sh@396 -- # hostname
00:29:24.956 07:09:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:29:24.956 geninfo: WARNING: invalid characters removed from testname!
00:29:51.482 07:09:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:53.382 07:10:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:55.909 07:10:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:59.189 07:10:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:01.714 07:10:08 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:04.241 07:10:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:06.778 07:10:13 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:06.778 07:10:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:06.778 07:10:13 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:06.778 07:10:13 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:06.778 07:10:13 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:06.778 07:10:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:06.778 07:10:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:06.778 07:10:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:06.778 07:10:13 -- paths/export.sh@5 -- $ export PATH
00:30:06.778 07:10:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:06.778 07:10:13 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:06.778 07:10:13 -- common/autobuild_common.sh@435 -- $ date +%s
00:30:06.778 07:10:13 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715490613.XXXXXX
00:30:06.778 07:10:13 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715490613.rYkS9T
00:30:06.778 07:10:13 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:30:06.778 07:10:13 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:30:06.778 07:10:13 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:06.778 07:10:13 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:06.778 07:10:13 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:06.778 07:10:13 -- common/autobuild_common.sh@451 -- $ get_config_params
00:30:06.778 07:10:13 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:30:06.778 07:10:13 -- common/autotest_common.sh@10 -- $ set +x
00:30:06.778 07:10:13 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:30:06.778 07:10:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:30:06.778 07:10:13 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:06.778 07:10:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:06.778 07:10:13 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:30:06.778 07:10:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:06.778 07:10:13 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:06.778 07:10:13 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:06.778 07:10:13 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:06.778 07:10:13 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:06.778 07:10:13 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:06.778 + [[ -n 2833962 ]]
00:30:06.778 + sudo kill 2833962
00:30:06.787 [Pipeline] }
00:30:06.804 [Pipeline] // stage
00:30:06.808 [Pipeline] }
00:30:06.824 [Pipeline] // timeout
00:30:06.829 [Pipeline] }
00:30:06.845 [Pipeline] // catchError
00:30:06.849 [Pipeline] }
00:30:06.866 [Pipeline] // wrap
00:30:06.871 [Pipeline] }
00:30:06.885 [Pipeline] // catchError
00:30:06.892 [Pipeline] stage
00:30:06.894 [Pipeline] { (Epilogue)
00:30:06.906 [Pipeline] catchError
00:30:06.907 [Pipeline] {
00:30:06.920 [Pipeline] echo
00:30:06.922 Cleanup processes
00:30:06.927 [Pipeline] sh
00:30:07.209 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:07.209 3183971 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:07.221 [Pipeline] sh
00:30:07.502 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:07.502 ++ grep -v 'sudo pgrep'
00:30:07.502 ++ awk '{print $1}'
00:30:07.502 + sudo kill -9
00:30:07.502 + true
00:30:07.513 [Pipeline] sh
00:30:07.793 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:17.785 [Pipeline] sh
00:30:18.068 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:18.068 Artifacts sizes are good
00:30:18.081 [Pipeline] archiveArtifacts
00:30:18.087 Archiving artifacts
00:30:18.327 [Pipeline] sh
00:30:18.608 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:30:18.622 [Pipeline] cleanWs
00:30:18.632 [WS-CLEANUP] Deleting project workspace...
00:30:18.632 [WS-CLEANUP] Deferred wipeout is used...
00:30:18.638 [WS-CLEANUP] done
00:30:18.640 [Pipeline] }
00:30:18.660 [Pipeline] // catchError
00:30:18.672 [Pipeline] sh
00:30:18.952 + logger -p user.info -t JENKINS-CI
00:30:18.960 [Pipeline] }
00:30:18.977 [Pipeline] // stage
00:30:18.982 [Pipeline] }
00:30:18.998 [Pipeline] // node
00:30:19.004 [Pipeline] End of Pipeline
00:30:19.038 Finished: SUCCESS